26 January 2017

How do you help a team become awesome?

Background

I raised a question, first during a Transpection Tuesday, then in the TestSverige Slack chat and finally with all sorts of people I've met, mostly software testers. The question was:

How do you help a team become awesome?

Awesome in this case refers to the kind of team where everyone seems comfortable; they laugh, they communicate, they do silly things without seeming embarrassed and at the same time they seem productive, motivated and ever evolving, with a low employee turnover rate.

This is my summary of those discussions.

Before we start: This is not specifically for managers, team leads, scrum masters etc.; it's everyone's responsibility and opportunity; anyone can improve a team's "mood".

Personal attributes/attitudes

Personal attributes and attitudes came up a lot during the discussions and they seemed to be the foundation on which you can add helpful activities. All of these work as self-reinforcing systems, so if you start to set a positive direction others will (eventually) follow. The same applies if you set a negative direction though, as this will dig a deeper and deeper hole to get out of.

So why don't we just act "good"? Because we're imperfect, also known as being human: We're scared, we sense injustice, we want revenge, we get stressed, angry or sad, we're sometimes egocentric and so forth.

For these reasons there are a few things you need to consider for each of the attributes listed below:
  1. It'll take courage to set a new direction and you might get hurt... sometimes a lot
  2. You'll need to consciously monitor yourself to avoid stress etc. getting the better of you
  3. You'll need to nurture these attributes in the team primarily by making positive examples visible

So, without further ado; dare to...
  • Be vulnerable
    "My uncle used to say that we like people for their qualities but we love them for their defects."
    /John Myers, Hellboy

    Share your struggles, admit you're scared, open up, allow people to come close and dare to be imperfect (aka. human) in general.
       
  • Be transparent
    Share what you know and do that's relevant to others, even though they might question your decisions, force you to (temporarily) stop something or even use the information to attack you personally.
       
  • Be accountable
    When you've messed up, take responsibility, apologize if appropriate and accept the consequences. Sometimes it's even beneficial to take responsibility for things you weren't responsible for just to get out of a negative loop.
       
  • Appreciate
    Make it a habit to register when someone does something good and tell them so. Make sure you're sincere; empty flattery doesn't help. Another nice way to appreciate people is to be a proxy for appreciation, e.g. "Just so you know, Bob gave a two minute speech this morning about how great he thought your design was".
       
  • Trust people
    People want to do good, so do trust them. Sometimes they'll let you down, sometimes you might even get stabbed in the back, but keep trusting them. That said, bad behavior should of course be dealt with (e.g. see "be sincere" below), but as soon as you stop trusting people you're heading in a bad direction. After all, if you don't trust people they'll never be able to show you they can be trusted, which starts a rather destructive loop. Also, people grow with responsibility. Finally: trusting people does not mean you shouldn't help them, or help them realize they need help.
       
  • Be sincere
    Integrity is sexy; if you think someone, including yourself, is being singled out, is getting unfair criticism or for other reasons isn't treated in a fair way: Speak up! Especially when people aren't given a chance to defend themselves.

    However, stick to your observations, not your interpretations. You don't know for sure that "this person is actively trying to hurt you" but you do know, for instance, that "the person was told to give you the latest version but you never got it". Sincere != Judgmental, quite the opposite actually.
       
  • Care about people
    Caring about people costs very little and the main risk you face is simply coming across as creepy. Do notice that caring does not mean micromanaging; instead it's about genuinely trying to create a good situation for others. Carita Jansson Tsiantes gave a lovely example in the TestSverige Slack chat that went something like:

    When you boil water to make a cup of tea, don't just think about yourself; prepare water for your colleagues who might want tea as well.
       
  • Help and support
    This can be briefly summarized as:
    "If someone has a problem, we have a problem".

    When asked for help, do help, and if people express frustration or confusion, offer to help. Few people ask questions if they don't need to, so rather than telling them "you should know that", try to help them learn how they can find the answer themselves; e.g. by introducing them to the right people, helping them get access to some information system, helping them get invited to a certain meeting/mailing list etc. An attitude to avoid is "it's not my job to help...". Sure, this is sometimes true and you need to work too, but then again: help the person help herself rather than ignoring the request.
       
  • Respect everyone
    No job, role or person is more important than any other. Of course some tasks might be more important to finish, but then focus on getting them solved as a team. A key aspect of this is understanding your colleagues' tasks, challenges, frustrations and talents. Andreas Cederholm brought up a great example of how to nurture this attitude:

    We run team test sessions where the whole team tests together. Add some cookies and laughs and it'll work even better.
       
  • Try
    If you want to challenge the status quo you'll have to try new things. Trying comes with an increased risk of failing and potentially making a fool of yourself, but that's necessary and typically a great way to learn. Sometimes trying something you don't really believe in might still be beneficial, simply to acknowledge that ideas are appreciated and that you trust people's judgment even when you might not agree with them.
       
  • Auto forgive
    A psychiatrist once told me a very smart thing about eating disorders and how to react when people have not been able to fight the disease (generally applicable of course):

    Guess who'll feel worst when this happens? You? No, the person who just "failed"! You don't need to remind them they "let you down"; they'll know and they'll feel terrible about it.

    People mess up, people make bad decisions, people have bad days. You rarely need to remind them; it's typically much more constructive to say "don't worry, shit happens, let's fix this" and move on. This is also important to nurture previously mentioned attitudes such as "try" and "be transparent"; if people are scared of potential consequences (including reactions), the only thing they'll try is to cover stuff up.
       
  • Smile (and laugh)
    Being met with a calm, warm smile is great medicine when you feel down or nervous about some bad news you have to deliver. Smiling also helps me, at least, stay calm, making it a useful tool to manage feelings of anger or frustration.
         
I get the feeling all the attributes/attitudes above point back to some basic principle like "get unhelpful frustration off the table fast; both yours and others'" or "always trust in people's willingness to do good"... but I can't really put it into words. Feel free to help me.

Activities

If the personal attributes/attitudes are the foundation the various activities below represent important tools to speed up the process. Notice though that the activities by themselves are not silver bullets and overusing them or using them at the wrong time can actually have a negative impact. Focus on the list above first!
  • Social activities outside of work
    E.g. cooking together, sports or board games. Prefer activities where everyone is active, which is not necessarily true for e.g. your typical after-work.
        
  • Quirky things
    E.g. quote book, silly competitions, fun/silly "rules" or internal titles.
        
  • Retrospectives taken seriously
    This doesn't have to be the meeting specifically; it can e.g. be a continuous, everyday team reflection activity. All problems brought up are dealt with. Problems are taken seriously even by members not personally impacted.
           
  • One on ones
    Allows people to raise concerns in a safe environment (assuming the person meeting members one on one has earned the members' respect).
        
  • Do each others work
    An example of this is Team Test Sessions where the team tests together (suggested by Andreas Cederholm, TestSverige), or move in the other direction and try mob programming with testers included. Everyone (product owner, developers, testers, designers...) attending e.g. courses in security or usability together could also help, as this kind of activity creates some common ground. Yet another suggestion is team members meeting customers, accompanying sales/support people etc.
         
  • Discussions about values
    E.g. take the "personal attributes/attitudes" list above and talk about each one: Is this something you want to strive for in the team, can you change something to help nurture this behavior etc.? Make it a team goal to improve and nurture the "mood" in the team in general.
        
  • Personal values
    Most of the personal attributes and attitudes require consistency. An activity where you sit down and state your personal "manifesto", goals or values can be important. For instance, it might be hard to treat yourself in a fair way without some guidelines; either turning you into an asshole demanding more from others than from yourself, or a "victim" never treating yourself well enough.
        
  • Clarify your intentions to your boss
    If you want to invest quite a bit of time in this, go to your boss, explain your intention and ask for her/his support. Making your boss, or if necessary, your boss' boss, an ally can provide access to several powerful tools (e.g. see "Supporting context" further down).
          

Symptoms

The list below represents "symptoms" that your team (or even company) is moving in the right direction:
  • People laugh.
       
  • You're met with a smile, even in bad times.
       
  • You know what your colleagues like, both at work and outside. E.g. their hobbies, interests, spare time activities, important life milestones, work and private goals, "hidden talents" and previous experience.
       
  • People talk about hobbies, spare time activities and the other things listed above.
        
  • Conflicts are taken seriously and navigated swiftly.
      
  • People blame themselves, if anyone, not others.
        
  • High level of motivation.
        
  • You rarely feel stupid (in a bad way).
        
  • Stuff that "should be done", gets done.
        
  • Ideas are taken seriously, people try new things and experiments are run frequently.
        
  • People admit mistakes and challenges early as they're not afraid of the consequences.
         
  • People meet outside work because they want to, not because they feel obligated to.
        
  • Few taboos.
         
  • Very limited "bullshit" or backtalk in the team.
        
  • You know what's happening in the team and rarely get "unpleasant surprises".
        

Supporting context

These things might be hard for you to actively influence, but be aware of them as they do seem to have an important impact:
  • Reasonable pace
    People need time for supporting, long-term activities, and under immense pressure/an unreasonably high pace these are quickly forgotten or down-prioritized. These lost activities help you become faster tomorrow than today, meaning they're long-term, multiplicative investments.
         
  • Stable organization
    Adding or losing team members can in the worst case force the team to start over in their attempts to be awesome. If you're the manager: try not to change teams that work great together, even though it might be tempting!
        
  • Ethics
    A product you believe in and feel ethically good working with helps. The same goes for the company's actions: If the company seems to act in an ethical way, people appear more willing to "invest" in the company in a way that's helpful.
         
  • Good social (especially empathic) skills
    Having team members who like the social aspect and are good at nurturing positive social behavior (not to be mixed up with people "talking a lot") helps.
         
  • Previous friends
    Not always true, as the previous friends may create a "sub team" within the team, but it seems to sometimes help as the friends most likely have a healthy relationship towards each other, which can spread.
         
  • Management accepting problems
    Having a manager/management asking for "solutions, not problems" can suppress people's willingness to bring attention to important problems or make the company accept suboptimal solutions. The intention to focus on what's constructive is not bad, but the message delivered can be. It's of course okay to ask the person if they have any ideas themselves on how to solve the problems they bring up, but don't make the solutions a "requirement".
         
  • Culture awareness
    Manager/management that genuinely cares about the company culture and how to improve it helps.
         

Misc

Some "quotes", all loosely translated from Swedish:
  • "I ask myself: How can I make this person feel like I want her to feel?"
    Carita Jansson Tsiantes
        
  • "It's professional to be personal"
    David Högberg
         
  • "It's not unprofessional to have fun but to do something in a boring way when it can be achieved just as well in a fun way, that's unprofessional"
    Klas Hallberg, from his book: YCDBRALAI (Swedish).
Finally a comment I didn't know where to place:
  • "If I say I can't talk about it, you know and accept this". Transparency is important but some information you mustn't share for various reasons. However, sometimes the mere knowledge you know some secret information can be enough to help people prepare for a big change, avoid unpleasant surprises etc. One example could be: "We will get a new boss, I know who it's most likely gonna be but I can't tell you until papers are signed; however, I can tell you I think this person will do a terrific job, so don't worry too much about it".
        

Conclusion

It makes perfect sense but didn't really occur to me when I first asked the question:

Making a team awesome is basically the same thing as making any relationship awesome and it starts with you and all the small decisions you make every day.

Good luck!

29 November 2016

Learning about learning by teaching

I've undergone a tough education in software testing:
  • 15 months long
  • Tests 3 times a week, 7 hours each and in front of a crowd
  • If you skip a test you'll have to do it again, typically within 5 days.
  • The expected level of competence is: "Good enough to teach others"
  • Little or no chance of "redos"; you'd better do it right the first time, every time

In other words: I've been teaching a class in software testing.

The intense experience of teaching testing like this has of course taught me tons of things and with this post I want to share the positive effects this particular job had on my own learning. Each benefit (of teaching) comes with a "why I find it useful" and "how you can apply this knowledge in a more everyday context".

Benefit: Curriculum

Why?

I'm not a fan of committing to a plan, especially not when it comes to my learning. However, the education's curriculum did force me to look into many topics/materials I would otherwise have skipped due to laziness, lack of interest or lack of understanding (not thinking it was useful to me). Some of these have definitely made me a more complete/skilled tester, such as test framing, a deeper understanding of test techniques and a better understanding of bias.

How?

Benefit: Go deep

Why?

I've read tons of articles, watched hundreds of hours of presentations/videos and spent a lot of time practicing testing hands-on. However, I often end up looking at topics in a quite shallow way, especially when I find the topic a bit boring (it may still be very useful). When you are to talk about a specific topic for just a couple of hours you're okay; there's little "need" to go deep. When you have to prepare several weeks of study material though, that's a whole different beast! Being forced to go deep into topics has enabled me to better question, improve, explain and argue for the various choices I make (for example why I chose to place my testing mission at the top of a certain status report).

How?

  • Dig deep into fundamental questions e.g. what is testing, why do we test, what is an oracle etc.
  • Look into related topics. Say you want to improve your written reporting skills; then look into e.g. rhetoric, design, how people read printed and digital documents, how people interpret colors or tutorials for your word processor/mind mapping tool/whatever. The point is: don't limit yourself to articles specifically about test reporting.
  • Set a "topic of the month" and try to become as skilled as you can in this topic. Don't stop because you feel "done", continue beyond that.

Benefit: Giving feedback

Why?

An important part of my job is helping students understand what they do well and what they might need to improve. To do this I have to observe and analyze what they've done, what they think they've accomplished, what actually made them accomplish what they've accomplished etc., and all this I have to do rather thoroughly in order to be able to explain it to them. This helps me create an understanding that goes beyond "do this or do that because it works better".

An example of this is grading various assignments and projects, as students, at least on a general level, need to understand what they did well and what they would have had to do to get a better grade. If they get the highest grade they need to know why, so they both know what to continue doing and what to improve. As testers we need these kinds of observation and communication skills all the time when working with developers, project managers etc.

How?

  • Study your own testing and try to explain why it was good and how it could be improved.
  • One area where I've found this pretty easy to practice (I can't prove that the practice translates to other parts of testing but I think it does) is watching presentations (e.g. on YouTube) and trying to give feedback to the presenter. What specifically did she/he do well and badly?
  • Study other testers and try to figure out why you find them good/bad testers. Be as specific as you can.
  • When testing, try to find positive and negative patterns: "The log entries are (almost) always relevant and well described making my work so much easier" or "The UI components often have poor vertical alignment".

Benefit: Teaching

Why?

Teaching in itself is a great technique for learning. You have to rephrase the content to match your own language, you hear yourself speak about the topic and you get questions pinpointing gaps in your understanding and/or explanation.

How?

  • Do teach colleagues and friends about the various topics you've practiced.
  • Write an educational article/blog post about what you've learned (you don't need to publish it to anyone to still get many of the benefits).
  • Talk at a local test meetup and if there isn't one, arrange one.

Benefit: Peer

Why?

Working with Maria Kedemo and Martin Nilsson has allowed me to get feedback on the ideas I'm about to share, feedback on my interpretation of various topics and someone to speak with when I feel stuck. It has also allowed me to learn from their knowledge and experience of testing.

How?

  • Speak with a colleague
  • Join a local tester meetup
  • Go to a test conference
  • Join the active community on Twitter
  • Try your own version of Transpection Tuesday (my post, Helena's post: 1, 2)
  • More ideas...

Benefit: Observe testing

Why?

I've spent a significant amount of time observing testers test as a group, observing the actual testing done by individual testers and listening to testers speak about their testing. All three exposed me to new ideas and made me question my own approach. It's also interesting because you get to see a specific problem solved in many different ways, which helps you understand what actually impacts the result; e.g. "what is the common denominator in these solutions, is there anything I can learn from that?" or "they all had different ways to set up but all ended up with the same solution, which setup was most efficient and can I learn something from that?".

How?

  • Pair testing
  • Look at other testers' notes, reports etc.
  • Do call for and attend debriefs no matter if you use the concept of test sessions or not
  • Offer to review things
  • Volunteer to mentor/coach another tester; this will enable you to observe another tester as well as get several of the other benefits mentioned in this post

Benefit: Consistency

Why?

To sit down and learn about various topics every day for over a year has definitely added some welcomed consistency to my self-education.

How?

Benefit: Questioning basic assumptions

Why?

Explaining fundamental concepts is incredibly hard but rewarding! As an experienced tester I take quite a few things for granted, and explaining concepts built on these assumptions to someone without experience leads to wonderful questions like "but why do we need testing at all", "what does it actually mean to test something" and "why can testers find bugs if developers who know the code can't (as in why do bugs happen at all)?". Answering these questions without being able to rely on "experience based assumptions" has led to more than a few epiphanies (and a lot of frustration of course).

How?

  • Talk testing with people having a different frame of reference (developers, management etc.)
  • Talk testing with people who don't work in the industry; for instance try to explain what you do to a relative.
  • Teach new testers at the company or teach e.g. developers in testing
  • Talk testing with new, inexperienced testers

Benefit: Ask yourself "how do you train this skill"

Why?

Reading and listening is nice but sometimes you need to actually practice skills to be able to learn them. When teaching, I've spent a fair amount of time trying to figure out exercises pinpointing a specific skill I want the students to practice, or exercises/projects in general helping students practice relevant testing skills. This experience now helps me both when less experienced testers want help learning a skill, when I try to explain/teach something and when I try to teach myself something.

How?

  • After e.g. a blog post, YouTube video or book, think about how you can incorporate the new concepts you've just learned about into your own work.
  • Try various exercises and try to replicate various experiments yourself; such as: 1, 2, 3, 4; to help kickstart your brain.
  • Whenever you're asked to explain something, try to come up with an exercise or experiment that helps demonstrate whatever you are to explain.

Benefit: Getting questions

Why?

I've already touched upon this, but getting questions from the students on anything that's not clear to them is incredibly challenging but rewarding. It has helped me realize flaws in my own understanding, forced me to question my own assumptions and challenged me to find new ways to explain certain concepts.

How?

  • Explain concepts to others
  • Ask for feedback
  • Ask questions yourself; this both inspires others and helps you ask questions "to yourself"
  • When reading/watching e.g. a book, presentation (video) or article, pause periodically and ask yourself: "what did I just read/watch and what of that is unclear/seems strange to me?"

Benefit: Having the time and expectation to learn

Why?

When in the middle of deadlines, huge backlogs and conflicting priorities it's easy to forget learning. Having the explicit expectation to learn new things has been an interesting experience and I feel confident saying I leave the teaching assignment as a much more competent tester. Spending as much time as I did on learning is not possible in most working contexts, but I think "expectation to learn" is the key concept here as it helps make it happen at all.

How?

  • Ask your boss: "How much time am I expected (or at least allowed) to spend on education?"
  • When doing backlog grooming (if you do this); add learning stories as dependencies e.g. "before we implement the new video player we need to learn a bit about streaming, video formats and performance testing related to streaming". If you end up never having time for these learning dependencies, try timeboxing them to make the expected time invested in learning more explicit.
  • Remember learning is a fundamental part of testing.
  • Differentiate between the learning that's necessary to solve your current task and learning with more long term, strategic value (e.g. learning more about testing in general, about web security, about test planning etc.). The "strategic learning" is often important to keep you and the company progressing but can easily be forgotten if put in the same "budget" as the task solving learning.

Final word

I removed several additional benefits I had initially included just to finally get this blog post published (it's been stuck in "draft" for over a year) so just to be clear: You can learn so much more about learning by teaching; this is just a mere introduction.

21 November 2016

SWETish

It was quite a long time since I wrote about an event. This is mainly because "it's not new to learn new stuff" anymore, so for me to write about an event it needs to be pretty special. SWETish, a peer conference that just ended, was exactly that: Special.

I've tried to wrap my head around what made this different and it boils down to:
I've not had this many "epiphanies"/been surprised this many times at a test conference since... one of my first ever.

Next question is: Why? What made me get all those epiphanies?

Well, I've rewritten my explanation of that I don't know how many times now. It boils down to things I can't seem to describe well enough yet, but I'll give it a shot in a separate post, probably after a Transpection Tuesday (I need your brain Helena).

So, let's skip to the good stuff: Content and lessons learned.

Credit

Before we start: Credit for the content below goes to all the participants of SWETish:
  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Talk 1: All Pairs by Morgan Filipsson

Morgan basically described a tool he thought was more or less obvious and most of us went like: "mind blown"... it felt a bit like Paul and Karen in the K-card story. It was just a simple, non-mechanical combinatorial testing support tool made in Excel: you enter the input values you would use in a test and it shows how many valid pairs (as in "all pairs") you have not yet covered. I don't know if this is exactly what Hexawise, or some other tool, already does but to me it was ingenious.
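
I don't know exactly how Morgan's Excel sheet is implemented, but the core idea can be sketched in a few lines of Python (the parameters, values and tests below are made up for illustration):

    from itertools import combinations, product

    def uncovered_pairs(parameters, tests):
        # Every value pair across two different parameters that "all pairs" asks us to cover.
        wanted = set()
        for (p1, values1), (p2, values2) in combinations(parameters.items(), 2):
            for v1, v2 in product(values1, values2):
                wanted.add(frozenset({(p1, v1), (p2, v2)}))
        # Remove the pairs that some already designed test covers.
        for test in tests:
            for pair in combinations(test.items(), 2):
                wanted.discard(frozenset(pair))
        return wanted

    parameters = {"browser": ["Firefox", "Chrome"],
                  "os": ["Windows", "Linux"],
                  "user": ["admin", "guest"]}
    tests = [{"browser": "Firefox", "os": "Windows", "user": "admin"}]
    print(len(uncovered_pairs(parameters, tests)), "pairs left to cover")  # prints: 9 pairs left to cover

The non-mechanical part, deciding which values to combine into the next test, stays with the tester; the tool just tells you when you've reached full pairwise coverage.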

But this is a peer conference so why stop there; open season time:
  • Forming an unbiased opinion can be expensive and money does matter
  • Getting many peoples biased opinions and comparing them can help sometimes
  • Beware of decision fatigue when using exploratory testing
  • Improving your test strategy or moving towards a more formal approach can delay the decision fatigue process
  • Mind maps are cool but do have limitations
  • Don't underestimate pen and paper
  • Excel is a pretty bad ass testing tool
  • Remember data can come in many shapes and forms and it often has complex relations to other data
  • Sometimes the simplest of ideas are not obvious to others
  • There are so many potential benefits with screen recording

Talk 2: How we test by Anders Elm and Erik Brickarp

A bit unorthodox, but a two-person experience report at a peer conference. The background was that I read Anders' abstract and realized we have a very similar test process at Verisure. So Anders described how they test at SICK, I described how we test at Verisure by focusing on the differences, and at the end we shared our key lessons learned and challenges. Long story short: We had both started with the goal to implement SBTM (in my case credit should go to Maria Kedemo et al.) but in both companies this had diverged into something else. I described this new process as much more similar to TBTM than SBTM.

I might talk more about this in the future but let's skip to the open season for now:
  • How much of this divergence is due to strategic decisions and how much is due to laziness/lack of competence (a valid question that requires more thinking on my part)?
  • Internal demos after each completed user story and mob testing during these demos seemed awesome (done at SICK but not Verisure, where I work)
  • We got an interesting question from Björn Kinell about "if you could magically change/correct whatever you wanted, what would that be". I don't want to share my answer but my advice: Ask yourself that question because it can help when forming a vision for your testing.
  • It's easy to forget/skip debriefs, test planning and testing not immediately related to a story in the sprint, but be careful as these activities often provide quite a bit of value.
  • Find activities that are "not testing but still testing" to easier get the developers involved. Examples: Add testability, "try" the product and support a tester when testing.
  • Ask the question "how do we catch <very basic bug> before the product is handed over to testers?" to start a discussion in the team about testing and developer responsibility.
  • Remember that small bugs that normally don't slip through development/testing can be a symptom of a much bigger problem like stress, change in culture or lack of support.
  • Time spent improving charters is rarely a waste of time.
  • SBTM "by the book" in a scrum team is not easy...
  • If the timebox aspect (session) is removed you need to find new stopping mechanisms and/or heuristics to help you stop and reflect on whether or not to continue.
  • Debriefs can be useful for many many reasons.


Lightning talks

Johan Jonasson spoke about construct validity in metrics
Anna Elmsjö spoke about finding and making allies
  • Candy solves most people problems (my wife approves)
  • Finding allies among the developers is important to help you get attention to testing problems
Tim Jönsson spoke about the value of "knowing your audience" when reporting bugs
  • There are more ways to report bugs than an entry in a bug reporting tool
  • Do highlight helpful developer behavior/good code and show gratitude when developers help you
  • Good lightning talks often focus on one thing and explain that one thing well!
  • A compelling story helps your audience understand your message
  • Testing is about psychology more than you might initially realize
  • Be humble, be helpful, be useful... and be respectful.
Göran Bakken spoke about how integrity can get in the way of efficiency
  • Refusing to do bad work is not always constructive
  • Two different ways to approach a change is to focus on supporting values or to focus on methods.
Kristian Randjelovic spoke about analogies
  • There are many ways to use analogies to help colleagues with limited understanding of testing understand otherwise rather complex (testing) concepts.
A general lesson for me (even though I didn't present a lightning talk):
If I'm given 10 minutes for my talk and open season, I'll try to aim for a talk shorter than 4 minutes and only focus on one message.

Talk 3: Test documentation in the hands of a stylist by Agnetha Bennstam

Agnetha showed three different examples of how she had helped turn massive 200-page documents into lightweight, often visual, alternatives. It's hard to explain this in text, but the running themes were: figure out what is important, find the right level of detail and aggregate data in a way that's useful/enough for the receiver rather than showing the full set of data.

Open season:
  • Dare to try something different!
  • You can add additional dimensions to a diagram by using the size of the dot you're plotting and color (see slide 55 in Andy Glover's presentation).
  • Use feelings when you test, e.g. "why do I feel bored", as an indication that there might be something you should react to/take notice of (see Michael Bolton's presentation).
  • Asking for people's gut feeling can be a good way to get their assessment without getting all the motives and data (makes people relax a bit).
  • Sometimes asking for people's gut feeling can help them "dare to answer" so that you can start to figure out what is happening e.g.
    "How 's the quality of ...?"
    "I don't know..."
    "But what's your gut feeling?"
    "Well, so far it's not looking good"
    "Why is that...?"
  • Gut feeling can sometimes mean "very useful and relevant information the person simply cannot/dare not articulate yet"
  • ... but gut feeling can also mean, or more often be interpreted as, "I base this on absolutely nothing but I don't want to admit that"
  • Beware of documentation existing only for the purpose of being ammunition/defense in a blame game
  • A tool that could clearly visualize:
    How fresh the testing is (time and impact of changes since last tested)
    How important the testing is
    How well the testing went (typically quality)
    ... would be cool.
     

Other stuff

One more crazy thing about SWETish: All the talks and open seasons were amazing and that's despite the fact that the experience report all organizers voted as their favorite... was not delivered since Joel Rydén got sick!

Another cool detail. Anna Elmsjö added a nice twist to the K-cards she created for the conference: On each card there was a short helpful sentence/statement/word at the top explaining the card a bit. For instance sentences like "I would like to add..." on a yellow card or "This is the most important thing ever!" on a red card; most were funnier than that but the funny ones I saw don't translate well to English. To make it even cooler she had different statements for every single card... Johan, I expect the same for Let's Test 2017 ;)


Summary

THANK YOU! This was one of the best conferences I've attended, I already look forward to the next one!

... and I hope I can bring some of SWETish awesomeness to PEST, a peer conference I'll attend in just a couple of weeks (we seriously need to work on how we name the various peer conferences by the way...).

03 November 2016

Test plan questions

I recently gave a talk on test strategy and gave the participants a list of "test strategy questions" (which should rather be called test plan questions, adhering to these definitions). I think and hope these can be useful for more people, so at the end of this post you'll find my most recent version.

How I use the questions

I use these questions as a quick way to form a test plan. The test plan might be everything from a high level, early plan for an upcoming test project to something small like the strategy for a specific test charter.

The way I practically use the list is pretty straightforward. I just run through the questions and basically strike out the ones that are not relevant. Not relevant refers to questions like "any particular tours we should perform" when creating a high level plan, "what are the key test/release milestones" for a plan covering a 90-minute test charter, or "how do we report bugs and status" if this is already stated by our overall process.

The questions not struck out I try to answer; either myself or in a meeting with the people involved. Involved in this case might refer to anything from a second tester to everyone involved in a project. The outcome is typically:
  • I/we lack the necessary information or competence and need help answering this question
  • I/we have an idea but I/we still want help/input from someone else
  • I/we think I/we know the answer well enough but still want a second opinion
  • I/we think I/we know the answer well enough
The results can be turned into a formally documented test plan, some brief notes next to the questions, a mind map, an image and sometimes the discussion and actions taken are enough (no need to document anything). Do remember that a test plan document is not a test plan; it's a document describing some parts of your test plan.

A few additional notes:
  • The list should be improved to fit your context; add, remove and modify questions on the list accordingly (if you for instance know which bug tracker to always use, which most do, remove that question completely).
     
  • You always have to take time and value into consideration. For a minor issue with low priority it's probably not worth inviting everyone in the project, as the cost of doing so is too high; your own judgement is cheaper and probably enough.
     
  • The list can help e.g. project managers, line managers and developers think about testing concerns so do spread (your modified version of) the list to these people.
     
  • You can split the list into several smaller lists; e.g. one short list of questions to use when creating test charters and a longer/different one for project test plans.
     
  • It might be wise to think about what's important for a test plan before you even look at the questions. The point here is that key details that definitely should be in your test plan might not be highlighted by any of the questions. If you start by using the list you might miss these details (your thinking is limited by the questions). Once again I think value is important: How much time and effort seems reasonable to spend on this particular test plan?

Credit

I created these questions together with my students so credit should go to all the students involved. I also discovered testers before me had created their lists as well and I would not have thought about some of the questions below if it wasn't for their efforts.

So, before I present "my" version I want to highlight and thank both the students, Michael Larsen, Ministry of Testing, The Test Eye and Michael Bolton. The four lists I just linked to all come with their own twist/benefit so do check them out.

Questions

Notice that some questions are generally only applicable when consulting, some only in an agile context, others when testing and development are clearly separated etc. As has already been pointed out: Do remove or modify questions not relevant to your context.

Initial

I typically want to answer these questions before I go on with the rest, because these five typically impact my approach for the strategy in general.
  • How much time do we have? 
  • Do we need to document the strategy? Why? How will we document it? 
  • What’s the primary objective (testing mission)? 
  • Are there any other objectives? 
  • Where can we get the initial information about the product/feature/system we need?

Planning

  • What resources do I have available?
    people, money, equipment, licenses, time, expertise, facilities... 
  • Is this a realistic amount of resources for this project? 
  • How flexible is the time plan and what happens if the product is not “ready” by the deadline? 
  • What project risks exist?
    e.g. customers not clear about what they want.
  • What is our backup plan if risk X happens? 
  • What is our backup plan in general if everything fails? 
  • What is likely to change?
    e.g. team setup, requirements, developers' focus, delivery dates… 
  • Any meetings we should attend/discussions we should be part of? 
  • How do we handle handovers/new members/lost members? 
  • Who does what?
    e.g. responsibilities and roles 
  • Any known problems along the road? 
  • Are there any workarounds or solutions to the known problems? 
  • Any regulations, rules, standards, certifications etc. limiting us or forcing us to work/act in a specific way? 
  • What administrative tools are (almost) enforced and what else do we need/benefit from? 
  • How do we plan the everyday work? 
  • What are the key test/release milestones?
  • How flexible is the scope - can the project be down-scaled if some unexpected problem happens?

Prioritization

  • What is most important (to test)? 
  • What is not important (to test)? 
  • What can be skipped altogether?
  • What quality characteristics are most/least important? 
  • Any specific aspect of these characteristics that is more/less important? 
  • What is covered by other teams? 
  • How do we continuously verify we’re doing the right thing? 
  • What are our done criteria?
    e.g. strict deadline, customer acceptance tests or some other assessment of "good enough quality" and if so, by whom?
  • What's the general requirement for quality?
    Aim for the stars/critical system or “just don’t explode... too badly”

Information

  • Where can I get information about X and who do I contact if that's not enough? 
  • Which claims exist? 
  • Which customers exist, can we contact them? 
  • Who can answer questions and which questions? 
  • What is still unknown/not clear about the project? 
  • How do we simplify the complexity?
    Simplify the process of learning the product. An example might be "what sort of visual models would it help to create?".
  • Any particular tours we should perform?
  • Are there any general guidelines to how we deal with learning activities and knowledge sharing? 
  • How do we further inspire/reassure/encourage feedback, reviews and knowledge sharing? 
  • How do we stay up to date with what is happening in the project?
  • How do we communicate with various information owners?
    e.g. email (address), phone (number), instant messaging tool, via other person, meeting etc.

Support

  • What kind of built in testability will help testers?
  • Which modifications can/must be done to the process in general to support the testing? 
  • What do we need to learn more about? 
  • Any particular configuration or test data we can prepare? 
  • Which tools can help us? 
  • What other teams should/can/must we cooperate with? When, how and why? 
  • Do I know who the developers are and can I talk to them? 
  • Do the developers have time allotted for us? 
  • Are there any problems getting their time/getting in touch with the developers? 
  • Will they keep working on this?
  • What will the developers test? How does this impact our testing? Can we make suggestions regarding their testing?


Testing

  • How do I tell right from wrong?
    Find potential oracles 
  • Which oracles are generally most trustworthy? 
  • What testing risks exist?
    e.g. unstable test environment or lack of knowledge about something.
  • Which test techniques might be useful? 
  • What expertise do we lack? 
  • Which scenarios/typical use cases exist? 
  • Which heuristics might be useful? 
  • What logical components/modules are there?
  • Is there anything I'm not allowed to do?
  • Any (testing) tips the developers can give to the testers?

Product

  • Which product risks exist?
    e.g. complex algorithms likely to be buggy or new technology used.
  • Is there any complexity we might be missing? 
  • Which functions will the system/application/feature have? 
  • Who’s the target audience? 
  • Which platforms, systems etc. should the product support? 
  • What requirements exist for this product? 
  • What problem is the product expected to solve? For who? 
  • What problems have happened in the past with this product? 
  • Any existing functionality that is impacted? 
  • What must the product never do?
    e.g. any data sent as plain text is strictly forbidden

Reporting

  • What do I need to cover and how well? 
  • How do we track and visualize coverage and progress? 
  • Which stakeholders exist?
  • How do we report bugs and status? To who? Why?
    "Why" as in: Which problems/questions will the receiver hope to solve/answer with our report. 
  • What other artifacts/information do we need to share? To who? Why? 
  • When do we need to report what?

Additional sources

28 September 2016

Next step

After a colleague pointed out I'm not 29 years old anymore I had to revisit my About page. While reading the rest of the text this sentence made me stop...

Next step is to spread the knowledge I've obtained and be a thinking practitioner who can hopefully do what James once did for me.

I wrote that sentence over 4 years ago, before any tester outside Linköping knew I existed. So I took a moment just to reflect.

"Spread the knowledge I've obtained"
I've been teaching students in Software Testing at a vocational university for 1.5 years, I've given numerous lectures, presentations and workshops on various topics at conferences (national and international), meetups and at my workplaces, and I'm now test coach at Verisure, a role in which I'm basically paid to share the knowledge I've acquired so far. Finally, I've, for most of the period, been a fairly active blogger.

"Be a thinking practitioner"
Transpection Tuesday, my blog, my peer conference appearances, my many dinner discussions with top notch testers (most recently Göran Bakken), my (former) activity on Twitter all add to the "thinking" part while my testing and test related experiments at Ericsson, Verisure and Zenterio, as well as my effort during Software Testing World Cup all add to the "practitioner" part.

Most important though: The two are not separate processes. The best example of this is probably Transpection Tuesday. During Transpection Tuesday, Helena and I often discuss a challenge one or both of us have, together we work out potential solutions or experiments to run, we go back to work to try these solutions/run the experiments and finally we share/ask for help to evaluate the results at a second Transpection Tuesday. Like I said, one process.

"Who can hopefully do what James once did for me"
After graduation I got emails from two former students; both made me feel I've accomplished exactly this...

... hmm, is it raining inside or why are my eyes moist all of a sudden...

On top of that other former students, testers and other colleagues (most recently a developer) have all helped me understand my efforts to inspire, guide and mentor have actually made a difference.

It's not without pride I say: In four years I've wildly exceeded my own expectations based on my memories of what I hoped to achieve in 2012. Not just my expectations for the next four years but potentially for my whole career. Shit... I need a break.

... pause music...

What's my "next step"

Took quite a bit of thinking but here's my updated "next step", or mission if you prefer that:

I've presented to, spoken with, inspired, coached and mentored testers in testing and quality. I want to continue this but to a broader audience, in ways different from what has been done before and inspire others to present, speak, inspire, coach and mentor as well.

Clarification:
Broader audience refers to e.g. developers, students (not studying testing), managers etc.

If you happen to read this in a few years, do remind me to report my status, please.

What's your "next step"

Enough about me, what's your next step?

24 August 2016

How to come up with test ideas

Intro

This post is an attempt to gather the various tools, tips and sources I've found useful to improve my ability to generate test ideas.

Notice that the various concepts described are huge and I will only briefly describe them in this post; it's up to you to learn more and put them into practice. A good start would be to check out the many articles I've linked to.

To developers
Important! The techniques and sources I will describe are just as useful when developing. Example:

"This will create a separate log file, what do I need to take into consideration when that file is created..."

If you want to develop high quality software, learn the fundamentals of test design!

Common product risks

Risk catalogs
Risk catalogs are lists of generic risks (things to test) in common components/functions.

You will of course need to judge what's relevant in your context, and risk catalogs aren't flexible the way actual thinking techniques are, but they are still incredibly useful.

Quality characteristics
Quality characteristics, also known as quality criteria or quality attributes, are an attempt to break down quality into meaningful/helpful categories. When generating test ideas these categories help us see the product from different angles.

Generic risks
Some aspects are (almost) always worth taking into consideration when assessing risk, e.g. "is this new functionality?" or "does this module have a history of being buggy?". I can't say this information helps me generate new and creative test ideas, but it helps me focus my effort on risky areas and is thus still relevant.

Test techniques

Test techniques are sort of "compressed tester mindset". The set of 9 techniques defined in the heuristic test strategy model is my personal favorite but there are many different sets out there.

Notice that each technique is way too broad to cover in this blog post but if you want to "learn to think like a tester" I urge you to learn more about and practice them!


Function Testing
Identify and test each function individually.

I prefer to begin by creating a list or mind map of all functions I can identify in whatever I'm about to test. A function in this case is anything the application can do, e.g. save a log message, identify that a new version is available or show an error message. To do this, touring is a useful practice, described in the links below.

When you've identified the functions, focus on each one individually, example:
"How can delete entry be tested? Successful delete, try to delete an already deleted entry, try to delete when lacking access, stack deletes of the same entry, lose connection to data source when deleting, modified delete request (e.g. SQL injection), potential timing issues, delete different entries in rapid succession, ..."

Domain Testing
Identify the input and output data in the application then look at which of the possible values/combinations of values you should test.

After identifying the variable (input or output) to test, you first group the possible values into equivalence classes. The idea is any of the values within each equivalence class should in theory behave the same. The next step is to select one or more values from each class to test. To do this you for instance look at boundary values, high risk values or realistic values. Finally when several variables need to be combined (such as in a typical form) there are various combinatorial techniques to help you find an efficient way (as few tests as possible) to actually combine these values into individual tests with the level of coverage you're aiming for.

Notice! When reading about domain testing you might get intimidated by all the paperwork involved: "I don't have time to do all that for a simple form...". For me the detailed analysis on paper is sometimes useful, especially to practice my domain testing skills, but generally I just keep the various techniques related to partitioning (creating and grouping values into equivalence classes) and value selection in my head and apply them "on the fly". Still: I do urge you to sit down and create e.g. a detailed boundary/equivalence table at least a few times because it's key to actually understanding domain testing.

For the combinatorial analysis: Remember there are excellent tools!
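
To make the partitioning and value-selection steps concrete, here is a minimal sketch assuming a made-up "quantity" field specified to accept integers 1-100:

    # Hypothetical spec: the "quantity" field accepts integers 1..100.
    LOW, HIGH = 1, 100

    equivalence_classes = {
        "below range (expect reject)": range(LOW - 5, LOW),
        "within range (expect accept)": range(LOW, HIGH + 1),
        "above range (expect reject)": range(HIGH + 1, HIGH + 6),
    }

    # Classic boundary value selection: each edge plus its nearest neighbors.
    boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

    def accepts_quantity(value):
        # Stand-in for the real validation logic under test.
        return LOW <= value <= HIGH

    for value in boundary_values:
        print(value, "->", "accepted" if accepts_quantity(value) else "rejected")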

Stress Testing
Identify various bottlenecks, input that can be sent in huge volumes or that requires more computing power than usual, dependencies, communication with external systems etc. and ask yourself how these could become a problem for the application (e.g. flooding of requests, missing dependencies, slow connections/timeouts/lost packets). Also look into the system's ability to restore itself after collapsing (e.g. will the system come back up after a database crash).

Do notice that stress testing often refers to testing the behavior when actually going beyond a system's breaking point. This is important since if you search for "stress testing" you'll mostly find just a subset of what the test technique "stress testing" is supposed to cover. Try also searching for e.g. "robustness testing", "load testing" and "performance testing".
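
A bare-bones sketch of the flooding idea; handle_request is a stub standing in for a real call to the system under test (e.g. an HTTP request):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(i):
        time.sleep(0.01)  # stub: pretend each request takes 10 ms to serve
        return i

    # Send far more parallel requests than the system normally sees
    # and check that every single one still gets an answer.
    start = time.time()
    with ThreadPoolExecutor(max_workers=200) as pool:
        results = list(pool.map(handle_request, range(2000)))
    assert len(results) == 2000
    print(f"2000 requests served in {time.time() - start:.1f}s")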

Flow Testing
Testers often reset environments between tests, especially if using test cases, by e.g. closing the application, or they focus on only one step in a long flow. The problem: Users don't. The test technique flow testing is thus about focusing on what will happen when a continuous chain of events occurs. One way is to actually identify a long, realistic flow (e.g. a "buy, cancel, refund" cycle in a web shop) and test this; another is to not reset between tests (e.g. not close the application or reset data for several days while testing).
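
As a sketch, here is a "buy, cancel, refund" cycle run many times against the same state without resetting in between; the Shop class is invented for illustration:

    class Shop:
        """Toy web shop used to illustrate a long, realistic flow."""
        def __init__(self):
            self.orders = {}
            self.next_id = 1
        def buy(self, item):
            order_id, self.next_id = self.next_id, self.next_id + 1
            self.orders[order_id] = "paid"
            return order_id
        def cancel(self, order_id):
            assert self.orders[order_id] == "paid"
            self.orders[order_id] = "cancelled"
        def refund(self, order_id):
            assert self.orders[order_id] == "cancelled"
            self.orders[order_id] = "refunded"

    shop = Shop()  # created once, deliberately never reset
    for _ in range(100):
        order_id = shop.buy("book")
        shop.cancel(order_id)
        shop.refund(order_id)
    print("100 buy/cancel/refund cycles against the same instance")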

Scenario Testing
Scenario testing is, unsurprisingly, about creating a scenario and testing that scenario. One important benefit of scenario testing is that it can identify gaps in the implementation that other techniques generally don't, as they focus on what has actually been implemented.

Claims Testing
Also known as "specification-based testing". The idea is to identify the claims made about the product and check if they are actually true. Claims can come from a variety of sources e.g. specifications, requirements, manuals, various in-app help sources, public statements or the EULA.

User Testing
User testing is about asking yourself "how can I use users to improve my testing":
  • A user attempts to complete various tasks in the application and walks you (the tester) through how she/he is reasoning. The tester's job is to identify problems the user runs into e.g. not understanding certain parts of the navigation.
     
  • Use user data to drive the testing e.g. "the most items a user has ever ordered is 104 and in our tests we never order more than 20, we should increase that amount".
     
  • Personas and other tools helping you "simulate" a user can be used when testing, example:
    "Looking at this from the perspective of 'Nancy, the power user', I realize we lack keyboard shortcuts in much of our application"
Do notice that user testing often refers only to the first bullet above. This is important since if you search for "user testing" you'll mostly find just a subset of what the test technique "user testing" is supposed to cover. Try also searching for e.g. "personas", "ux testing" and "usability testing".

Risk Testing
Identify risks and design tests to cover each one of them.

These risks can either be identified:

... inside-out (ask: what could go wrong here?), example:
"We now allow users to delete their private messages, how could that be a problem? Well what if a user could potentially delete another user's messages? How can we test if that's possible to do?"

... or outside-in (apply common risks to the product), example:
By using Michael Hunter's "You're not done yet" I realize saving the configuration to a read-only folder could be a problem.

The chapter "Common product risks" includes tons of sources useful when applying the outside-in approach to identify risks.

Notice that "risk-based testing" (prioritize testing that's considered the greatest risk first) is not the same as the test technique "risk testing", but will be your most common result in a web search. However, articles about risk-based testing typically covers tools and techniques to identify and prioritize risks making them useful anyhow; just be aware of the difference.

Automatic Checking
How can you use tools you know about to improve your testing?

Do not confuse this with "automation", "automatic checking", "automated testing" or whichever term you prefer; that is a way to actually execute the ideas identified and not a test technique in itself, even though it's unfortunately sometimes used as some sort of technique.

Other useful resources about test techniques
  • What is a good test case, by Cem Kaner
    More information about some of the test techniques above as well as some additional techniques.
  • Lessons Learned in Software Testing (book), by Kaner, Bach, and Pettichord
    The chapter about test techniques is available online as a PDF as part of the BBST course. I very much recommend getting the rest of the book as well.
     
  • BBST Test design, by Cem Kaner and Rebecca L. Fiedler
    BBST presents a broad set of test techniques. Highly recommended reading in general and for test design in particular.

Critical and lateral thinking

Everything else in this article is more or less an application of critical thinking, supported by lateral thinking. These are thus the most abstract, but most generally applicable, concepts I will describe.

Critical thinking
Definition:
"The objective analysis and evaluation of an issue in order to form a judgement"
Oxford Dictionaries

Compare this to e.g. James Bach's definition of software testing:
"Questioning a product in order to evaluate it"
... any similarities?

A more in depth definition of critical thinking has been made by The Critical Thinking Community.

When developing software we make massive amounts of assumptions, simply to save time. For example:
This framework will prevent..., this is what users want, this is the simplest way to...
Critical thinking is essentially the skill needed to identify these assumptions.

Learning:
I also designed a critical thinking exercise for testers, which might help.

I would love some more (and better) sources on critical thinking, so please drop a comment with your suggestions!

Lateral thinking
Lateral thinking is, simply put, the skill of thinking outside the box.

Definition:
"The solving of problems by an indirect and creative approach, typically through viewing the problem in a new and unusual light. Contrasted with vertical thinking."
Oxford Dictionaries

Reading material:
Exercises:

Domain knowledge

To be able to form any test you first need to understand, at least on a conceptual level, the product and its context.

I for instance like to think I'm pretty skilled with test techniques, that my critical and lateral thinking skills are decent and that I know how to get some good value out of tools such as risk catalogs. Still, when new to a product I'm not very effective until I've actually learned the product.

Things you can do to speed up learning a new application and its context:
  • Ask for a demonstration
  • Look through available documentation (start with documentation intended for customers)
  • Make a visual model of the application, enhance the model as you learn more
  • Tour the application
  • Ask for help to make a SFDIPOT analysis of the application
  • Learn about and try out the tech the application relies on
  • Speak with stakeholders to learn about e.g. the application's purpose and customers
  • Start somewhere and slowly work your way outwards, to avoid getting overwhelmed
  • Observe users using the application
  • Pair up with developers, testers and others working with the application
  • Ask questions... lots of questions
  • Look into, and apply, various learning techniques (search for "learning techniques")
  • Learn about relevant laws, regulations, standards, certifications and business practices
  • Speak with domain experts, sales people and customer service
  • Look at test documentation available for the application

Other tools

Test framing
Test framing is about being aware of what you're testing and why or, as Michael Bolton describes it in the link below, to "link each testing activity with the testing mission". I find this useful to practice as the awareness, in my experience, improves my ability to see potential risks, keeps my focus on relevant test design and actively challenges my critical thinking skills. For example: "I'm testing the import with malformed files because the release claims improved error handling" is a framed activity; if I can't complete a sentence like that, I should probably question why I'm running the test.

Opposites
When you have a few test ideas it's sometimes useful to look at them and ask:
What's the complete opposite of these ideas, and could that potentially be a concern too?

Examples:
What happens when several entries fit a delete condition?
Opposite: What happens when no entries fit a delete condition?

What if the network has limited speed, delaying requests?
Opposite: What happens if the network floods the application with requests?

How will a user with limited computer skills approach this?
Opposite: How will a tech-savvy user approach this?
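In executable form the pattern is simply to cover an idea and its mirror image in the same test. A sketch using pytest; delete_matching is an invented function standing in for the real product logic:

  import pytest

  # Invented stand-in for the product logic under test.
  def delete_matching(entries, condition):
      return [e for e in entries if not condition(e)]

  # 0 matching entries is the opposite of the original idea
  # (several matching entries); both extremes get covered.
  @pytest.mark.parametrize("matching", [0, 1, 50])
  def test_delete_condition(matching):
      entries = ["match"] * matching + ["keep"] * 3
      remaining = delete_matching(entries, lambda e: e == "match")
      assert remaining == ["keep"] * 3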

Great additional sources
This is a collection of great test idea articles that I've used for this article and/or when generating test ideas, but that just didn't fit into any of the other chapters:

Please help me improve this

Initially this was intended as "just another blog post" but it sort of took on a life of its own and became a pretty big reference page. I always appreciate comments, but since I expect people to treat this as "the truth" more than usual, I will appreciate corrections and suggestions even more!

17 August 2016

PSL alumni Slack group

Problem Solving Leadership is quite a different course in that the time in Albuquerque is just the start rather than the whole learning experience. For this reason a Slack group has been set up to facilitate learning between PSL graduates from different classes, rather than just within one class at a time, which is normally the case.

How to join the Slack group
To be able to register you first need an invite (this is how Slack works). Use the following form and I'll send you an invite:
http://www.brickarp.se/psl

If you feel like it's taking too long, feel free to contact me via Skype (@brickuz).

Important! I cannot verify that every single person joining has actually attended PSL, but beware! Joining without having participated will:
  1. Likely hurt your own experience when/if you attend.
  2. Not give you very much, since you won't have any knowledge of the simulations referred to.
If you're a graduate and like the initiative, I would very much appreciate it if you spread the word about this group to other participants in your class or other PSL graduates you know.

If you want a very brief background on who else was involved in starting this group, check out this Twitter thread.

Finally: This group is in no way limited to software testers; I just happen to be a software tester and my most efficient way to reach out is via my blog.

Take care and remember to watch out for the sheriff...

07 June 2016

Time estimates in testing, part 2

Intro

Before I begin: thanks to feedback, especially from Jari Laakso, I've found several flaws in my first post, which is exactly what I hoped for when I published it, as the flaws help me understand estimation better. I will highlight some of them in this post, but I also want to point out that part 1 has been updated several times since I first published it, to clarify or correct content.

How to interpret the graphic below

Do notice that these are not polarities where you switch from one to the other; instead they are continua. I will however describe one rather specific scenario for each box for the sake of simplicity, giving you a chance to see the differences between "each corner". My hope is these examples can work as inspiration and help you think differently.

One final remark: I will describe all the examples in terms of cross-functional scrum teams consisting of only developers and testers. Of course this is just one of many ways a team can be set up, but I'll use it, once again, for the sake of simplicity.

1 - Estimate testing stories

Separated planning with detailed estimates.

In this scenario testers create their own stories, estimate them and get them prioritized as part of the regular scrum flow (the cross-functional team will now have a mix of stories relevant to developers, testers and others). This allows testers to create their own plan optimized for testing needs, and I find it easier to motivate developers to help out with testing when needed, as the only content left in the sprint might be test work (and this is very much visible), but...

Since planning testing even a week ahead is rather hard in most scenarios (open bugs blocking stories, open bugs delaying stories, late code deliveries, quality differing from what was expected etc.), I find this approach somewhat chaotic (even though you might expect the complete opposite).

A second challenge with this approach is the risk of creating a gap, within the team, between testers and developers. This, I think, is because if you actually try to follow the plan (the test stories) you lose some of the flexibility to drop one thing in favor of another in order to minimize the feedback loop between "feature implemented" and "feature tested".

Finally, my experience is that it takes a certain skill as a tester, and/or a certain kind of developer group, to actually get developers to engage in the test part of the sprint planning, further increasing the risk of a gap between testers and developers.

For me, session-based test management is an approach to exploratory testing worth looking into if you want separate test planning, with estimates, without making it too expensive; a small sketch of what those estimates can look like follows.
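A sketch, with invented numbers, of estimating in sessions rather than hours per test case: you estimate how many time-boxed sessions each charter needs, which keeps the administration cheap while still producing a number you can plan with.

  # Invented charters and session counts, purely for illustration.
  session_length_hours = 1.5  # a common session time box

  charters = {
      "explore the refund flow": 2,       # sessions
      "check claims in release notes": 1,
      "regression around search": 3,
  }

  total_sessions = sum(charters.values())
  print(f"{total_sessions} sessions ≈ {total_sessions * session_length_hours} hours of test effort")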

If you choose this approach, I would strongly recommend sharing sprint plannings, backlog groomings etc. so that everyone sits down together. It's easy to "optimize" so that testers hold their own groomings, or so that developers don't engage in them, which is like asking for a disconnect to happen.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The project manager/product owner/manager/test lead who pretends/thinks testing can be controlled.

2 - Estimate dev stories

Shared planning with detailed estimates.

In this scenario testers and developers share stories (the DoD includes both implementation and testing). A story is estimated with regard to both testing effort and development effort, for a combined estimate. A story is considered open until "someone" decides that the implementation, testing and bug fixing are done. That "someone" is typically the concerned testers and developers together, but if they disagree, or the estimate turns out to be way off, a product owner or similar might have to step in.

My experience is that this fosters a very close bond between testers and developers, which typically has a positive effect on e.g. tester status, information flow and code-related testability. Another benefit is that developers have better motivation (and help) to test themselves, as their work is not over until the testing is "finished".

However, since the testers are normally in the minority, it's easy for them to become the ones always losing votes and debates, which may impact how they test, when they stop testing etc., and often not in a good way. Also, since the testing is so closely tied to the development, things like integration testing, general quality assessments (e.g. security, performance, consistency, test legacy for support of a changed context etc.) and regression testing may be missed (regression can be helped by developers taking a more active role in creating automated smoke tests, but that's just a part of it).

One last problem: if you actually create tester-developer hybrids and the pressure builds up (important release, budget cuts etc.), teams might trick themselves by moving more members into development and tampering with their normal quality standards (which in a few cases might be the right choice, but often isn't in my experience, usually because the "temporary need" becomes permanent).

If you choose this approach, I would strongly recommend introducing activities such as system tests or release tests. In these activities, also involve developers as part testers, part bug fixers, so that a gap between developing and testing isn't created (testers not being able to provide feedback quickly enough in the new sprint). A different method is to have teams solely dealing with integration testing (more waterfall-like, a support team), but for this to work there have to be strong information channels between the cross-functional teams and the integration test teams.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The R&D manager/developer who doesn't really understand there's a need to test anything else than the code that was just written or the developer who rush the testing part since "that's not that interesting".

3 - Follow development planning

Shared planning without detailed estimates.

Very similar to the above but without time estimates for the testing. The difference is that you save a bit of money (less administration) but testers become a bit more detached from the development process (though not necessarily from the developers), and if estimates can actually be somewhat accurate you lose them as a planning benefit and potential delay indicator; in this scenario sprint plans assume testers will manage the time limits as long as developers manage theirs, which is of course not always true. But do remember that estimates are often not that accurate to begin with (described in part 1), so the actual effect of dropping test estimates might be small; personally I would most often worry more about the potential disconnect between testers and developers.

If you choose to approach testing like this, I would strongly recommend closely monitoring that testers participate actively in all the typical planning activities (groomings and sprint plans). This is to help build a team where testers and developers keep working closely together. The goal is to "just" remove the part of the planning meetings where testers perform time estimates, nothing else.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
Testers who simply dislike administration and don't even care to try estimating. This might be very troublesome because such testers can easily contribute to the disconnect between testers and developers: they might use the removal of estimates as an excuse to skip "irrelevant meetings" where key implementation decisions are taken.

4 - Report on/off track

Separate planning without detailed estimates.

In this scenario testers attend backlog groomings and sprint plans to stay up to date with the planning and the product, as well as to get attention on testability needs; but they have a completely separate planning, done either together with developers (costs time) or with testers only (may hurt developers' understanding of, respect for and interest in testing). From a project planning perspective testing "just exists", and whenever the team suspects there's a risk of the testing or bug fixing being late, this is reported and dealt with just like any other case where estimates are wrong or development is delayed.

This separate planning happens as soon as possible after the sprint plan is complete (so that the scope of the sprint is known). Testers discuss whether there are any activities they should plan other than the regular testing of stories. They also look into whether stories should be tested together to avoid costly retests. This makes it easier to take testing needs into consideration without messing up the development planning, and testers can make rapid changes to their plan without impacting development stories.

This has some interesting benefits, but there's a great risk of testers forming a subteam within the team, potentially hurting the developers' interest and effort in testing (information flow should be less of a problem since groomings and sprint plans are key inputs to the test planning).

If you choose this approach, I would strongly recommend leaving a bit of time open for the unplanned: bugs unrelated to the development stories are more likely to be discovered, and the help testers will need might not have been taken into consideration when planning, so that extra time has to exist for when testers need it. This might seem stupid, but if you've been in a software project you know that some sudden needs from testers (blocking bugs, needed support tool updates etc.) are hard to predict at the sprint start.

Finally, the name for this approach is a bit strange. If we're not estimating testing (at all) we can't possibly say whether we're on or off track. I thus want to again stress that in reality we always make estimates, but rather than debating whether one of many tasks should be estimated at 3 or 5 hours, we look at the whole picture and make a rough estimate of whether we'll make it in time or not.

If you choose to approach testing like this, be careful. Putting testers with low communication skills, poor reporting skills or an unwillingness to make their work transparent in a team like this can lead to isolation and a negative, mythic image of testing, where developers neither really understand nor care about testing ("they don't seem to care about the planning and the team, so why should we care about them?").

Examples of two dangerous archetypes potentially advocating for this approach for "the wrong reasons":
Waterfall-accustomed testers or developers who still haven't figured out how to work with testing in scrum and other agile approaches; also testers who don't want to, or aren't able to, make their work reasonably transparent.

The actual implementation

In a real-life scenario we move a bit between two or more "squares"; for instance, testers will sooner or later need to do work that only benefits testing, so completely following the developers' stories is a myth (or an example of a dysfunctional team). However, testers not feeling allowed to test things outside the current development scope, e.g. because of big pressure to close stories, is something I've come across multiple times, so my goal is to highlight options. The examples in this post are meant to help you see alternatives; they're not models ready to implement.

My hope is that part 1 helped you understand the dynamics of estimates and planning, while this second part gave you some practical ideas to experiment with. Like in part 1, I want to emphasize that these are still thoughts in process, but I can't wait forever to share them if I want to get feedback...

Finally: Thank you once again Jari Laakso for all your feedback on the first post!