30 July 2014

An extensive list of what helped me present

Dealing with anxiety
  • Rehearse, rehearse, rehearse... rehearse. I thought rehearsing too much would make my presentation sound like a "script" but I was wrong. The parts I had rehearsed the most (and I rehearsed a lot) were probably the ones I delivered the best. The reason, I think, was that I could relax and focus only on delivery.
  • I skipped the session before mine and instead went to my room and gave the presentation to the bed, chairs and curtains (in other words, no crowd). That really helped me relax. I also noticed in my recorded rehearsals that when I gave the presentation a few times in a row, the first time was usually the worst. So I figured rehearsing once right before the actual presentation would make the actual one better, and I think it did. If I interpreted Clair Moss, who delivered a kick ass presentation as well, correctly, she seemed to have had a similar strategy.
  • It may sound weird/impossible but I decided it wasn't that big of a deal to present at CAST. I think this was simply a matter of convincing myself by repeating the idea over and over again. When I was actually there it seemed to work cause I stayed a lot calmer than expected and never really felt that "this is insane" feeling.
  • I found it important to remind myself that I presented to impress myself, nobody else. That made the whole thing a little less stressful, as the worst case suddenly became me being disappointed in myself rather than half the testing world being disappointed in me.
  • Smile! It's weird but it really eases stress (for me)!
  • I found there is a huge mental difference between "I'm scared and nervous, but I can do this" and "I'm scared and nervous, I don't want to do this". Once again, repeat it to yourself until it sticks.
  • What if I lose track? Once again: rehearse and it gets less and less common! Rehearsing also helps in case you actually do lose track, as you can find your way back much more easily (losing track is not necessarily a bad thing by the way).
  • Being a presenter is an opportunity and only an opportunity. I strongly believe (don't correct me, it would make me a lot more nervous .) people rarely remember a bad presentation but good ones stick. As a presenter those remembered presentations open doors, lead to insights (and good in that sense could mean an utterly failed presentation you learned a lot from), help you connect with people and raise your confidence. Bad ones are forgotten by everyone else, so learn from those presentations (make them good) or forget them like everyone else.
  • I try to always get to the location where I'm about to speak before my audience. For me it's a mental thing; if I'm there first it feels like they enter my turf, if I'm not it feels like I'm on someone else's and the former just makes me less nervous.
Dealing with being "in shape"
  • Eat, drink (water!) and sleep. Sounds simple but easy to miss, especially when you're getting nervous.
  • Beware of jet lag, try to get there a few days early to adjust.
Dealing with language
I'm not a native English speaker so I had an additional challenge.
  • For every day spent in the US my English got better and better. So arriving a few days before the conference was great.
  • Rehearse, rehearse, rehearse!
  • I found it much easier to spot weird things I said when listening to them in retrospect, so recording my rehearsals really helped improve my language. I didn't ask any native English speaker to listen to my recordings and comment on typical linguistic errors I make, but that could have been useful.
Dealing with creating the content
  • I took on a way too big topic in the first draft of my CAST abstract. The one I finally sent in focused on only one of seven parts described in the first draft, and still I had to shred a lot of content and could even have focused the entire presentation on one of the four parts I talked about. So beware of too broad topics!
  • Peers are invaluable! Having someone review my slides, content and even a rehearsal was key. The feedback I got from Helena was invaluable! And without Maria's feedback when preparing my abstract, I can't imagine I would have been selected to talk!
  • Questions I asked too late, and that forced me to basically rethink my whole presentation, were: What's in it for the people attending? Why should they be there? What do I want them to remember? What do I want them to feel? How can I make that happen? Before I asked those questions the presentation was fairly focused on me and what I thought was interesting, not what I thought would be valuable to someone else.
  • Rehearse! It's the only way to see if the amount of content fits the time given.
  • Keep adding content all the time! It's much easier to shred or compress content to fit a time slot than to fill gaps.
  • Not allowing my ego to make content decisions helped me a lot. People don't care much about how great I am but they seem very curious about my mistakes, embarrassments and problems.
  • My topic was sort of an experience report. The great thing about experience reports is you're sitting on all the facts and information. That also fights anxiety: "I know what I experienced and if your experience differs we can discuss it, but mine is still valid (just like yours)". Thus, you're always right as long as you stay truthful about what you experienced.
Dealing with delivery
  • Rehearse, rehearse, rehearse!
  • I recorded myself rehearsing and when I listened to the recording for the first time it just blew my mind! I thought I was varying my tempo and volume as I was presenting but realized ... I wasn't. My voice was monotone, I sounded uninterested and pauses were non-existent. Next time I will record video as well (I didn't video record any rehearsal this time, simply due to discomfort and laziness). If you feel like recording yourself is complicated, think again! I used basically the first free app I found for my smartphone and it worked beautifully!
  • I forced myself to try very long pauses just to see the effect. That helped me realize the power of pauses (even though I felt I forgot that a little bit during my actual presentation). And the power? Personally I feel a well timed pause can help attendees digest my information and be ready for more, and having no pauses is a bit like reading a text with no space between paragraphs; it's exhausting and you lose some valuable structure.
  • Listening to myself also helped me identify where I tended to ramble.
  • I have three kids. As a way to prepare myself I read or made up stories for them and tried to tell them with as much energy as possible. Good way to practice storytelling.
  • Actively using my body (using my arms to express something, move around, shrug, express what I'm saying with my face etc.) helped me "get excited"/get in my presenter mode.
  • During rehearsals I sometimes found myself speaking faster and faster. The most efficient way I found to battle this was to just stop, make a long pause and kind of "reboot".
  • As I rehearsed, I experimented with different ways of describing things, changed the order and varied the tempo. For me that led to some interesting insights, mostly related to pauses (already mentioned) but also how I could add energy to certain parts by raising the tempo/volume, or better emphasize certain parts by saying keywords slower and with a different volume (sometimes louder, sometimes softer, depending on context).
Dealing with remembering the content
  • Rehearse, rehearse, rehearse, rehearse, rehearse, rehearse!
  • Rehearse without slides
  • Rehearse without any other tools (private notes etc.)
Dealing with making people come
I'm not much of a marketing person but...
  • I wrote a couple of blog posts about the presentation. [1, 2]
  • I spoke with people at the conference.
  • I spoke a bit about it on Twitter beforehand.
  • I didn't do what Huib Schoots did but god I wish I had! [1, 2]
Dealing with technology
  • Rehearsing a lot without the slides helped me prepare in case the technology failed me. I also think doing this helped me look less at the slides while presenting with them.
  • I booked a conference room for 1 hour every morning during the last 2 weeks before the conference. Being able to present with all the technology in action really helped; for instance, I realized my computer was not well configured for attaching an external monitor. I also got to test switching slides using my wireless mouse, which took a little practice to get right (damn you, scroll wheel!).
  • Triple check technology... and then check it once again.
  • Oh, and the US doesn't have European power sockets; luckily I thought of that.
Some CAST (CAST-like conferences) specific insights
  • Everyone is there to learn so everyone wants you to succeed! It may sound a bit strange but I've rarely felt more supported as a presenter than I did during CAST.
  • K-cards may seem like a scary thing but in reality they are a great tool for me as a presenter to get valuable feedback and learn. Realizing that made them a lot less scary. The open season helped me understand whether people liked the material, which parts seemed most interesting and what I might need to tweak or think more about, and I got new ideas on how to improve the content. That gave me comfort!
My Top 3
  1. REHEARSE! Thank god I spent so much time doing that!
  2. Record yourself and analyze the content.
  3. Presenting is an opportunity/privilege, not a punishment (usually), so don't make it one!
Last word
I wrote this post just under a year ago, on my way back from CAST. But I never published it as it seemed kind of silly for a first time conference speaker to share advice on how to prep an awesome conference talk. A few weeks ago I picked it up again and since it still made a lot of sense I decided to share it.

However, I would love comments from both experienced and inexperienced speakers on what I got wrong (in your opinion) or maybe even right (in your opinion), just to improve both this post and my own ability as a speaker.

Take care!

27 July 2014

Peers

What is a peer
Definition:
A person of the same age, status, or ability as another specified person
(The Oxford Dictionaries)

The kind of peer I will be speaking about is other people within the testing community striving for excellence in a similar fashion to me, and preferably on a somewhat similar level, even though that's hard to compare.

We love to team up
Everywhere in life we team up. We get partners to share our everyday life with, friends with whom we share similar stuff, other friends to share more specific things with, such as a hobby, close colleagues to discuss work with and so forth. All these people, in their respective areas, act as advisers, complement our skills, support us when things are shitty, support us when we need to challenge ourselves, generate ideas and provide inspiration, among many other things.

A peer in testing to me is the same thing but from a general career/tester perspective. They add an external perspective (or at least a second one when they happen to also be close colleagues), they help me develop, they help me challenge myself, they help me solve problems I have that require testing or test related knowledge and they can help me when I want to, or have to, change jobs.

Does this sound a bit abstract? Let's get more specific!

Help solving a (testing) problem
A question that has been nagging me for a while now is: "What is my role?" or rather "What do I think my role should be?". The problem is I don't really know and I feel I'm stuck when it comes to figuring it out. During Let's Test recently, among many things, I had a 2 hour discussion with Anna Royzman and Andrii about the changing role for testers. It was a great discussion where both Andrii and I had similar questions, and together we came to insights like "everyone comes with their own unique mix of skills/experience/knowledge and we have to be careful not to limit our potential and usefulness to the role's description, people's general expectations of us or the role we once had/the person we're replacing". That's one example of how peers (Anna and Andrii) helped me get a better understanding of/solve a test related problem.

Challenge to improve
I'll save myself some work by just pointing at my story of how I became a speaker at CAST. It perfectly illustrates how a peer (Maria Kedemo) challenged me to do something outside my comfort zone in order to help me improve.

Outside perspective
I use my super peer (Helena Jeret-Mäe) for this all the time and even have a name for the quick, very context specific, questions I ask her: sanity checks. "Is what we're doing, that I feel is wrong/strange, really wrong/strange in your eyes?". If it is, it tells me I'm at least not the only one feeling that way, and if not I might get an explanation I've simply missed. An example would be the deployment strategies used in another peer's company; he often asks me: "Is this actually how software is built?". I do my best to explain how we work and/or simply say "no, I've seen ... instead and it seems to work better, at least in their context". Too often, though, I just answer "unfortunately yes".

Generate ideas
It's easy to get stuck, whether it's when I'm testing, in my career or in some other aspect of my "professional life". This is once again something I constantly do with Helena: "I feel like my only options are..." and she replies "but what about...". Practical example: "I feel like my role is somewhat limited to...", "Have you tried speaking to your boss? To the developers?...". And those, seemingly obvious, answers just didn't occur to me in the middle of my "this shit is broken" kind of mindset.

Practice
A recent example is Software Testing World Cup, which I participated in with a couple of other peers. Another one is practical exercises we've tried in my local test community (EAST), one being us splitting up in teams and attempting to use various thinking hats while testing. And finally a third one is any workshop at a conference where you interact with other testers to figure something out together, sometimes by simulating something which is more or less impossible to do on your own.

Support
Sometimes you just need some cheering or a hug when things don't go your way. And getting that from someone who does understand your headache is very much appreciated, at least for me. Also, something Helena (once again) and I do a lot: Remind each other of how incredibly much we've improved/accomplished, which is easy to forget.

Coaching and Teaching
During a recent Peer Conference (more about those in a moment), James Bach explained his coaching model, which added a whole new dimension to the visual representation I had already seen. That's one example of how a peer explained to me his view/experience/knowledge on a topic. Also any (peer) conference talk is one tester teaching others, often followed up by an open season where "everyone can teach everyone on the topic including teaching the teacher". Finally Helena and I do a lot of coaching as part of our weekly sessions.

Second opinion
Before Let's Test I recorded what I wanted to present as a lightning talk. I sent the video to Helena and her answer was something like "that should be a lightning talk, it doesn't work as well as a video". In retrospect I definitely agree but in that moment it felt like I had to release it, simply because I had spent a lot of time recording it. Thanks to Helena, I didn't.

"Am I stupid" -check
This goes for closer peers, either as in geographically close (sitting next to you) or personally close as in someone you trust very much; preferably both of course. Sometimes I have an idea or question that just seems stupidly obvious. If I have a close peer I can quickly ask that person and (s)he either provides an answer or a "well I don't know either so I don't think it's too obvious". If I don't have that person to quickly ask, I easily end up wasting time, waiting until I'm either brave or frustrated enough to ask, or slowly figuring out the answer myself.

One practical example was a specific way to trigger a kind of failover in the product I was testing. I almost felt stupid asking if there was another way to trigger the failover. My close colleague (Saam Eriksson) next to me answered "I'm not sure but could be..." and after we consulted the expert it turned out there was a way that had basically never been tested. Without Saam I might have accepted that there probably was only one way to trigger the failover, and the "other way" would not have been discovered for another ten years or so.

Fuel the fire
I quickly lose interest in things (my fiancée just nods right now, looking at me like "have you finally realized that"). Having peers who constantly push me, help me find the necessary spark when my motivation is low and inspire me by showing what's possible to accomplish helps me stay motivated and curious. Also, doing things with others is (for me) much more rewarding than doing them alone, so peers help me have fun.

Inspire
Related to that: when I heard Helena would help out planning a test conference it suddenly struck me "maybe I can do that too?", when I watched Alan Richardson's technical web testing webinar/course I suddenly thought "maybe I could record something like that" and the list goes on. There's an unlimited amount of possibilities but sometimes you need confirmation that something is possible/available, and the combined ingenuity/bravery of peers often provides that inspiration/confirmation.

Job
There's a lot to say about jobs and peers. Some quick benefits I've experienced myself:
  1. Peer review of CV/portfolio stuff
  2. Actual job offers, either from the peer or recommended by the peer
  3. Help choosing between job offers
  4. Recommendations
I've reviewed several CVs, mostly for non-testers, but I still add it here since knowing the context definitely helps. I've also had peers review my own CV.

My current job was an offer I received from Maria Kedemo and I've also turned down two other interesting offers received from peers... and, well, let me not get ahead of myself.

Also, when I got the job I have now, I had a hard time deciding between that job and another one. Luckily I know a person who happened to have worked for both companies. He could provide me with tons of valuable info helping me decide.

I've helped peers recruit as well as get recruited, by spreading job ads or recommending specific testers I know for specific jobs.

Finally, I've put in a good word for several testers (and programmers) where I've known the recruiter.

Peer conferences
Peer conferences are not for everyone as you're expected to contribute (present an experience report, speak up, share knowledge/experience) as well as be ready to get challenged on whatever you say. If you feel you're up for the challenge the reward is huge though (lessons, inspiration, network and new practical things to try)!

Being alone in your company/team
I'm the only tester in my "part of the company" (the company is basically split in two). In this situation I've found various testers (peers) outside my company to be invaluable, as they help me "stay sane" and not get stuck on seemingly simple matters. My experience from being the only tester in the company, the only tester in my team, one of several testers in a mixed team and part of a pure testing team is: if you're alone, find someone to share your tester burden with, otherwise it'll grow! Even though this person might not know your context that well, he or she can provide an invaluable second opinion or help you solve problems when stuck, especially when stuck on "simple things you 'should' know and feel embarrassed to ask just anyone about" (I envy you if you've never felt like that).

Super-peer
I call Helena Jeret-Mäe my super-peer. The difference is she knows so much more about me (as in my personality, private life, aspirations, weaknesses, everyday work headaches etc.). This saves a lot of time when explaining a problem, and she can see connections, for instance between my personality and certain problems, that I'm not aware of myself. Most importantly though: we've built a deep trust in each other. She can question things I really don't want questioned and I will listen (things like not prioritizing family as much as I should), as well as push me much further, as she sometimes knows my limits better than I do myself, and vice versa of course.

Can't you do this on your own?
You can do most of this on your own, but much of it will be harder and/or take longer and/or not be as fun. Since I reached out to the testing community my level as a tester has skyrocketed!

How to find peers?
Getting to know other testers seems fine and dandy but where do you start?

Twitter
Some tester once said to me, about testers, "if you're not on Twitter you don't exist". Even though that's an exaggeration, it's still true that you'll find most renowned testers on Twitter and it's an amazing way to make that first contact. For instance, when I went to my first big test conference, I could immediately start talking to a ton of people thanks to Twitter and the conversations there. A good thing about Twitter: everyone is entitled to add to a conversation so there's no initiation ritual. Instead you listen in and when you feel you have something to say you join a conversation and voilà, followers will come (if you're nice and/or smart) and relationships will form.

(Peer) Conferences and courses
Peer conferences usually have fewer, and likely more dedicated, participants. They don't last for many days but the bonds created are, in my experience, very strong.

A larger test conference is perfect to get in touch with many new testers and broaden your network.

Courses are a bit like peer conferences in the sense that you can easily become very close to some testers in a relatively short amount of time.

If you need suggestions for (peer) conferences and courses, contact me (some info at the end).

LinkedIn
I mostly use LinkedIn to connect with my local test group after I learned about them, but it can work for finding new testers to hang out with as well. To me, though, it's not as obviously useful as Twitter.

Meetups
If there are no local test meetups in your area, visit a bigger town to join one or start your very own local test community. If you need some inspiration: check out Erik Davis' excellent post!

Blogs
Starting a blog about testing is good for many things (reflection, portfolio, getting feedback etc.) and one of them is that a good blog generates interest in you. Also, reading other testers' blogs and leaving (sensible) comments on them will teach you things as well as help you connect with these people.

Software Testing Club
I don't use this community very much for some reason but every time I do I get impressed. Seems to be a great way to connect so try it out for yourself!

Colleagues
Why look far away when you have curious testers next to you? Suggest an educational activity like spending an hour watching some great testing presentation (need suggestions? just ask me!) and discussing the contents together. Who knows, maybe you'll find a colleague to keep learning with.

Summary
Don't isolate yourself, or other testers if you're in a test lead/manager position! Instead, try to reach out, ask for help, provide help, share ideas, socialize, listen, speak up and network. The field we work in is way too vast to be covered by one person alone. Team up and you'll be much more efficient... as well as have much more fun.

If you are, or feel, completely new, ask for a mentor (I can either help directly or hopefully get you in contact with someone who can help). If you know your way around but have done things in solitude so far, go to a test conference, start a blog, join Twitter or go to/start a (local) meetup for testers. Still feel stuck? Contact me! (use this blog, Twitter, LinkedIn or any other way you find... except tapping on my window in the middle of the night as it would probably freak me out).

Good luck!

09 July 2014

Software Testing World Cup, Test report

Credit
This is based on my team's effort in the European qualifying round for STWC (Software Testing World Cup). The team consisted of Agnetha Bennstam, Sofia Brusberg and me. Everything mentioned in this post is my interpretation of our collective work.

What is the Software Testing World Cup?
STWC is exactly what it sounds like: Testers from all over the world competing in software testing. You can read about it on the STWC web site but here's the bare minimum you need to understand this post:

To get to the finals you have to win a qualifying round. Typically there's one qualifying round per continent with a maximum of 250 teams each round and 1-4 testers per team. A round is 3 hours long during which the contestants test an application, revealed just before the round starts. Throughout the competition teams can ask questions to the application's product owner who answers these in a live video channel available to everyone. To get a better idea of how this works, Matt Heusser has shared recordings of the video feeds. At the end teams are scored mainly based on a test report and the bugs they've filed.

Intro
To make any sense of this post you first need to check out our test report from STWC (in case you're worried: I've checked with the STWC judges that it's okay to share our report).

My hope is this report can help spark some new ideas for other testers as well as, despite being written under severe time pressure, work as an example of how a test report can look.

Overview
Our test report is made up of six sections: Coverage, Bugs, Other risks, Suggestions, Conclusions and Strategy. Generally I think the report turned out pretty well; it's concise (not counting the coverage section), covers what the stakeholder asked for (including improvements) and communicates what we discovered. With that said, there is definitely room for improvement...

Coverage
"What did we actually test?"

The reason for this section is that bugs, risks and suggestions don't say much about what we actually tested, just where we happened to find notable stuff. Coverage is there to ensure further testing is focused on risky and/or not yet tested areas.

Given the limited time I think we did a decent job with the coverage section. Some of the categories are too broad and/or ambiguous (e.g. Search), making it easy to believe testing was performed when it actually wasn't. Also, I think we should have communicated the coverage in a more concise way.

With more time to spend and better understanding of the product I would have liked to add a visual map of the coverage as an overview. Right now I think it's bits and pieces, missing the big picture.

Another way to improve this section would be to give a quick estimate of how confident we are in each area. In our report there's no difference between "we made a quick test in this area" and "we spent a lot of time and effort testing this area", which hurts the section's usefulness.
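For illustration, such a per-area confidence estimate could be as simple as the table below (the areas and ratings are invented for this example, not taken from our actual report):

    Area        Testing performed                  Confidence
    Email       several sessions, two testers      high
    Accounts    focused security testing           medium
    Search      quick tour only                    low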

Bugs
"How bad is it? Any reason to fear the product will explode in the customers' face?"

Lesson learned: add a more business friendly description to bug reports and describe the business impact already when filing the bug! This time we didn't, and thus had to quickly translate bug descriptions while writing the report. The result is unfortunately not very impressive. But I did learn something and, from what we discovered, I think the right bugs are listed.
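To illustrate what I mean, here's a made-up example (not one of our actual STWC bugs) of the same bug described on two levels:

    Technical: The display name field accepts 500 characters, which breaks the layout of the order history page.
    Business impact: Customers with long display names see a broken order history, which will likely generate support tickets and hurt trust in the product.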

Other risks
"Apart from severe bugs, what else did we observe/not observe that might affect release readiness?"

I like what we added to other risks, possibly with the exception of the potential availability issue described. I think that one was added because one of us pointed it out during the stressful last minutes, so we added it without really questioning whether it was relevant. Compared to several reported bugs that were not mentioned, and based on our interpretation of the target audience, it should probably have been reported as a minor (or rather potential) bug, not been part of the test report. But once again, overall I think this section did turn out nicely.

Suggestions
"Potential future improvements we discovered"

One of my favorite sections in the report. It could have been somewhat more detailed, but it's to the point, relevant and what the stakeholder specifically asked for. I tend to always note down suggestions for improvements but adding them to the test report was (I think) a first for me. I will definitely consider adding a suggestions/improvements part to future test reports!

Conclusion
"Taking risks into consideration, what's our recommendation going forward?"

From a formatting perspective it's bad to let the page break in the middle of a section. Also, I love the "edit Account editing" (time to proofread the report? obviously not enough). But, looking at the conclusion, I still find it relevant and correct even with some time to reflect. Another thing I like is that it doesn't present the stakeholder with only one option; instead it embraces the fact that we (testers) don't know the market and thus provides information relevant both for a "regular case" and a "we must release now" case.

Strategy
"What resources did we have at our disposal, what limitations did we have to adjust to and what ideas guided our testing?"

Since this report was given to an external customer we figured a rough plan might help their internal test team, even though it is of course highly context dependent.

If you compare the time plan with my previous blog post you can see we didn't use the last hour to update it. I think it's close enough since we didn't diverge too much, and updating the image would not have been a good way of spending our last precious minutes. However, a quick note on how we diverged from the plan would, I think, have been helpful/useful information. Also, we wrote it in the form "we did", not "we plan to", which is simply wrong. Apart from that, nothing spectacular but hopefully somewhat meaningful.

Coloring
The header colors are there to make the report easier to overview. Apart from the red one for bugs and other risks they aren't very "self explanatory", but I do think they help the reader find the information (s)he's looking for, especially when looking at the report a second or third time. One thing to improve is to make the conclusion stand out more, as I imagine a stressed reader would like to find it immediately. A different choice of, or way of using, colors might be a way.

Overall
I think we got the overall layout and categories right, I think most of the content was relevant and, after all, we only had 3 hours to finish planning, testing and reporting! For a 3 hour competition I think it's a well written report despite its shortcomings, which I hope I've highlighted well enough in this post.

To all the other STWC contestants: I would love to hear what you did differently in your reports and how it turned out; I'm eager to learn from it!

Finally: Thank you Agnetha and Sofia!

Lessons

  • Colors are great to add visual structure and make the report easier to skim/overview
  • Proofread more than once; stupid linguistic and grammatical errors hurt the report's credibility
  • Think about the actual business impact/description already when writing bug reports
  • It's hard to describe, especially in a concise way, what was tested and what was not
  • Don't write things beforehand, guessing how something will turn out; in that case, describe it as a plan
  • Improvements (what we called "suggestions") can definitely have a place in a test report
  • Don't wait with the report until the end, try to create as much as possible in parallel with the actual testing and planning

07 July 2014

Software Testing World Cup, Time management

Credit
This is based on my team's effort in the European qualifying round for STWC (Software Testing World Cup). The team consisted of Agnetha Bennstam, Sofia Brusberg and me. Everything mentioned in this post is my memory/interpretation of our collective work.

What is the Software Testing World Cup?
STWC is exactly what it sounds like: Testers from all over the world competing in software testing. You can read about it on the STWC web site but here's the bare minimum you need to understand this post:

To get to the finals you have to win a qualifying round. Typically there's one qualifying round per continent with a maximum of 250 teams each round and 1-4 testers per team. A round is 3 hours long during which the contestants test an application, revealed just before the round starts. Throughout the competition teams can ask questions to the application's product owner who answers these in a live video channel available to everyone. To get a better idea of how this works, Matt Heusser has shared recordings of the video feeds. At the end teams are scored mainly based on a test report and the bugs they've filed.

The plan
We had a beautiful plan starting the competition:


Personally I have a history of not following plans so I was surprised when we actually executed pretty much according to it (I give Sofia and Agnetha full credit for this). Here is an adjusted, and somewhat extended, version of what actually happened:


First 55 minutes
After some minor technical issues with the YouTube channel we were up and running. While Agnetha and I mainly focused on the stakeholder's live video channel, Sofia started touring the application. We kept going like this gradually focusing less and less on the video until Agnetha mentioned we should stop and debrief what we had learned so far as well as discuss test strategy and possibly adjust the time plan. I would say Agnetha was our time keeper for most of the competition.

During the debrief/strategy discussion we identified 3 main areas to focus our testing on, based on the stakeholders' requests:
  • Email functionality
  • Screen resolutions (mobile devices)
  • Possibility to steal accounts (customers)
To make it easy we focused on one area each. As security is one of my better areas I went with the account task, Sofia had already started looking into the emailing functionality so she continued, while Agnetha started playing around with the mobile devices we had/various resolutions.

Whiteboard from the competition

Exploring and debriefing
For the next 65 minutes we tested the product and had a debrief after 30 and 50 minutes. When we stopped for the first debrief (once again thanks to Agnetha) we realized all of us were testing user settings even though this was only part of Agnetha's "scope". To some degree this was good because the user settings functionality was rather buggy, but at the same time the bugs we found weren't generally that severe and once again, this was not communicated as a high priority from the stakeholders. So after the first debrief I think we got on track. During the second debrief everyone seemed to be in the zone. We kept this debrief short, just briefly summarizing the more severe bugs we had found, what we were currently looking at and if we needed any help, to make sure the flow wasn't lost.

One big difference between our plan and what actually happened: We didn't pair test nearly as much as we thought we would. I think this was a good decision, it was simply too much to cover in too little time so we had to spread out our resources.

Wrap up
In the initial plan we thought 30 minutes would be enough to finalize the report, review bug reports etc. Luckily I was a bit nervous about the report (my focus area) and suggested we should start with just under an hour left. After all, we could always keep testing after the report was done.

Let's just say there was no time after the report! I reviewed my bug reports and realized some things were not clear or that I had forgotten to add something, I added things to the test report and often questioned "do we really know that or are we just assuming/remembering it wrong", which required further investigation, and so on. We also had to rewrite some bug descriptions to be more business oriented rather than technical. All in all, the last hour went by quickly.

Conclusion
Thanks to Richard Robinson, who invited me to a 3 hour test session during Let's Test 2013, I had an idea of how short 3 hours actually is. Still... 3 hours is not much time! I'm glad we did a time plan before the competition and reviewed it during the competition. Without it I think the limited time would have hurt us much more!

... and it was fun to actually follow a plan, I might try that again in a few years .)

Finally: Thank you Agnetha and Sofia!

Lessons
  • Writing a good bug report takes a lot of time
  • Taking notes is crucial, we could have saved a lot of time when writing the test report with better notes
  • 3 hours is not much time
  • A (good) plan helps a lot when under pressure
  • Debriefs are valuable to continuously set/correct direction
  • Even when alone you can benefit from stopping regularly to review what you're doing
  • Visualizing a plan (like a schedule or the image in this blog post) helps, as it makes the plan easier to relate to (at least for me). I think splitting it up in a way where the size of each activity is relative to its length would have improved our visualization (see the rough sketch below).
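For example, a proportional text version of our three hours, reconstructed from memory with rounded times, could look something like this (longer bars = longer activities):

    0:00-0:55  ###########  Watch the PO video channel / tour the product
    0:55-1:10  ###          Debrief, pick three focus areas
    1:10-2:05  ###########  Focused testing (debriefs after 30 and 50 minutes)
    2:05-3:00  ###########  Write the test report, review bug reports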

31 May 2014

Let's Test 2014

Let's Test 2014 is my weirdest conference experience so far. Put that in perspective: my other two conferences were my first ever and my first ever as a speaker, so you can imagine this was a special one. So what happened?

My mission
I came to Let's Test with a question, "What is my role?" (both in my current job and what I would like it to be in the future), and the mission to gather as much understanding of this as possible.

I left with more than I could ever have imagined and still feel like I've just scratched the surface. I also left with something... else, something I haven't understood yet and with a magnitude I don't even know if I can grasp. If you feel confused, welcome to my world; I have some serious thinking to do.

Welcome!
Imagine this: You step out of your cab and immediately see two people running like their lives depended on it just to hug you. And we're not talking about a soldier coming back from a war zone, we're talking about a test conference! That's how it all started. I got a jumping hug from Meike Mertsch and a two minute hug from Helena Jeret-Mäe, can't ask for a better welcome!

Talking, testing challenge, some beer, dinner, another more large scale testing challenge, more talking... You wanna know one reason why Let's Test rocks? During the testing challenge in the evening some of the most well known names in the business were mixed with newcomers and anyone in between. The result was beautiful and I can now say I know, from my own experience, that for instance Rob Sabourin is a great test lead and Jon Bach is an excellent tester.

Oh, by the way, what's not to love about someone immediately putting a testing challenge in your hands as soon as they've introduced themselves? That's Kristjan Uba; we'll come back to him, but the short story from day one is that he instantly made an impression.

Key lessons
  • Peers are invaluable! Peers are invaluable!
  • Twitter builds connections that can often be transferred into "real life" connections. So testers, get a Twitter account!
  • Seeing into the minds of some of the brightest and/or most well known testers in the industry was exciting.
Workshop day
The first day started with an interesting keynote by Tim Lister. It was basically his life story in the software business and lessons he had picked up along the way. It all came together as a beautiful experience report that provided just as much inspiration as it provided actual "content".

Key Lessons (Tim)
  • Be around people ahead of you.
  • Beautiful mindset from Harlan Mills: "If I get a compiler error I go: why did I get a compiler error". Illustrates the need to not just accept mistakes/problems but to learn as much as possible from them.
  • Life is short, stop waiting and just go out and make your own career!
  • Having fun and staying curious are key driving forces and should be treated as such.
  • Right and wrong is for kids, we're grown ups (see the full spectrum of possibilities).
Next up was Rob Sabourin's tutorial. That one just felt a bit too much like a lecture for my style (I have quite a few useful notes from it though, so I'm not saying it was bad, just that I personally was looking for more interaction), so in the afternoon I moved (after getting an okay from the organizers) to Steve Smith's second workshop, "Managing your personal board of directors". I'm glad I did as this workshop turned out to be one of the highlights of the conference!

Key lessons (Rob)
  • Domain expert != Tester, we need to understand the skills involved in being a tester.
  • Decision tables are useful both to help you understand something and to spawn testing ideas. They can help you discover combinations of conditions and actions not thought of (see the minimal sketch after this list).
  • 30 years of experience as a tester does not mean 30 years of learning testing
  • "Is this the problem?", repeat, repeat, repeat.
  • If a session just happens to not fit you, leave and go for something else!
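To make the decision table idea concrete, here is a minimal Python sketch. The login feature and its conditions are invented by me, not taken from Rob's tutorial; the point is only that enumerating every combination of conditions quickly reveals rows nobody has specified an action, or a test, for:

    from itertools import product

    # Hypothetical conditions for a login feature; each can be true or false.
    conditions = ["valid password", "account locked", "2FA enabled"]

    # Every row of the decision table is one combination of conditions and
    # thereby one candidate test. Rows with no defined action are exactly
    # the gaps a decision table helps you spot.
    for row in product([True, False], repeat=len(conditions)):
        print(dict(zip(conditions, row)))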
Steve Smith's workshop had a simple setup: We started out identifying various "directors" influencing us (e.g. a Chief Financial Officer advising you to save money/make more money). After that, one of us volunteered to be the star (Kristjan Uba) and the rest of us played his board of directors. I ended up being "Chief Family Officer", a perfect fit I would say. Finally, after each of us had had the opportunity to ask a wide array of questions regarding our roles, we started a board of directors meeting. The meeting agenda was to solve three challenges/questions/dilemmas Kristjan was facing. It was fun (as in "laughing hysterically" fun) and, much thanks to an awesome job by the Mind Bender (Lars Sjödahl), the meeting really felt like something coming straight out of a head. I also wanna thank Kristjan for volunteering, he did a phenomenal job as "the star" in a rather exposed position.

Key Lessons (Steve)
  • How does your personal "board of directors" look? Which directors have the most power? Which are the loudest/most quiet?
  • Tim Lister talked about the value of peers, Steve Smith made something cool happen between peers and every moment of the conference proved the value of peers. So, like a wise man wrote just one chapter ago: Peers are invaluable!
  • Good experiential workshops are sooo powerful! (at least for me)
  • The more fun I have the better I seem to learn/remember.
Steve Smith's keynote
A 150 people experiential workshop as a keynote is just as awesome as it sounds. The mission was simple, almost to the degree of being silly: You are 12 people; get as many of them as possible to simultaneously stand, for 15 seconds, on a piece of paper placed on the floor. A judge will count your score and make sure you don't cheat. You get one point for each member who makes it, and the ones that don't will become observers. These observers aren't allowed to interact with the rest in any way for the rest of the exercise. The exercise went on for three cycles and each time the paper was folded in half. Silly, right?

Well, who is forced to become an observer? Who decides this? Whose ideas are considered? How much risk are you willing to accept and who decides that? What rules can be broken, what happens if you do, who will (try to) break them and who will support/reject this? How do people react when their ideas are rejected? How much will you think about your current context (members) and how much about generic solutions? Do points really matter and will anyone question this? Will you manage to be as creative as you usually are, under time pressure? What role will you take within the group? What happens when people with "key roles" become observers?

Silly changed to mind-boggling! I ended the second day taking a long walk with Helena Jeret-Mäe, spending much of it talking about how I had taken more of a leadership role in my group (or at least it felt like that) and how, in this role, I had lost much of my usual strength as a creative rule challenger and gone into "point tunnel vision". The beauty of experiencing this in the simple exercise was that I could much more clearly see the outcome and what had led to it, as very little distracted me. Thank you Steve! Thank you Helena! Thank you team!

The final debrief also helped add to the experience and hey, as part of the debrief I've now given a keynote at Let's Test.

Panda Panda!
I've never (I think) heard anyone say a single bad word about Pradeep Soundararajan. Instead everyone seems to treat him with huge respect, so it was interesting to finally attend one of his sessions. He did not disappoint! In the session he shared the story and vision behind his company Moolya. It was an hour of inspiration, passion, admiration and self-doubt about whether I really do the most valuable work I can for my current company. The basic message, as I interpreted it, was: "It's all about business value and we must understand the business problems to be useful". Just like Steve's keynote, this one left me with a ton of material that will require massive post-processing before I can fully understand it. Short version: Great session and Pradeep really lived up to his reputation! Panda Panda!

Carsten
I feel a bit guilty cause I basically needed the second day workshop to zone out and reflect on what Steve and Pradeep had helped me discover but not yet understand. The 2.5 hours of sleep I had the night before might have had an impact as well. But one image that stuck with me from Carsten's workshop was how much clearer a problem, and its solutions, became when observing a group role play the problem compared to only having it on paper or, worse, in my head.

Martin Hynie and Christin Wiedemann
During CAST 2013 Martin said he wanted to speak at a conference and I told him I thought he would make a terrific presenter. He did! Well, both he and Christin did, to be precise. I wasn't thrilled about the topic but went partly because the lineup, seen from what I was interested in, was rather weak: I did not regret my choice! For me this session was not so much about the presented content as it was a perfect demonstration of passion, creativity, curiosity and a willingness to explore/experiment, not to mention awesome stuff to use when I present myself (a slide deck made up of selfies, awesome!). Inspired, to say the least!

Basic outline: Over a few beers the value of games as a means to train testers came up. No one could provide any real evidence for their claims that games did improve e.g. relevant cognitive skills. Due to this, Martin and Christin not only went out to look at any existing data; they gathered a group of people to experiment with (including a "unicorn": a 25-28 year old tester who had basically no experience of video games or board games), got in contact with a university and, together with scientists, conducted cognitive experiments during which brain scans were made (Martin apparently has a big brain). All in all, the scientists seemed to wanna keep doing these experiments and they now have at least some indications (even though the sample group was way too small) that playing games does improve relevant cognitive skills for testers. But never mind the outcome, focus on what Martin and Christin did just because they couldn't find enough proof! Testers everywhere, take note and be inspired!

Lightning talks
I sent an email to Johan Jonasson before the conference asking if there would be any lightning talks, as I had one ready. The answer was: "Great idea, would you mind facilitating them?". I've never facilitated anything, and knowing my track record when it comes to being patient and shutting up, this couldn't turn into anything but a disaster. Well, it didn't; actually it went pretty well (I think) and it was a lot of fun. Also, in case you think facilitating seems trivial, it's not! I can't even imagine the difficulty level of facilitating e.g. a controversial talk with tons of arguing. I already had great respect for facilitators like Paul Holland, Ilari and Richard Robinson but holy cow, they rock!

So, takeaways from the actual talks? Nah, I was way too focused (read: nervous) to remember anything. But I did have a ton of fun and got some valuable experience!

Uncertainty workshop
Day three's workshops were crazy hard to choose between (I wanted to go to all of them). In the end I opted for Fiona Charles, much thanks to a quick chat I had with her the evening before. It was an interesting workshop and an interesting group to work in (a factory schooler made it even more interesting and sparked the first of my two pretty tense debates during Let's Test). Uncertainty is one of those things I've learned to embrace and I actually do much of my best work when the plan is long gone and chaos is upon me (the flip side being I create chaos more often as I suck at following a plan). So my biggest takeaways were ways to communicate uncertainty, not to mention that I need to actually communicate it at some point. Some new tools, but the main takeaway was a started thought process.

The gem!
When looking at the program, one session stood out: Quality Leader - The changing role of a software tester. This aligned perfectly with my mission (figuring out my role). I had previously just quickly shaken hands with Anna Royzman so I didn't know much about her, but I was in for a treat. The actual session was great, not exceptional, but definitely great and it addressed many of my thoughts. What made the session awesome though was Anna and a guy from Ukraine...

I think many from the session perceived Andrii as a clown. He made some pretty bold comments about testers' usefulness, as well as not caring to use K-cards. After the session he stayed and a debate started between the three of us, a debate that addressed a ton of things related to my question. First of all, Andrii was a really smart man! He, just like me, was trying to figure out his role and we debated for almost 2 hours, meaning I have nothing to report from Jon Bach's closing keynote. It started out mostly as a 2 on 1 with me and Anna against him. After a while it turned into more of a "free for all" where we slowly reached some kind of consensus.

I won't bore you with all the details but the short version is: "Is (software) tester really a good name for the role many of us claim to have, based on what goes into a pretty common view of what testing is within the software business context?". The outcome for me was something like "every person is a bit of everything, the title mostly reflects what we specialize in and/or do the most; this still might not make software tester the best title for many of us if we want to avoid confusion or limiting our ability to act, but tester sure describes much of what we do even though tester and software tester are most often perceived as equal in our business". Well, I wish that discussion was recorded but I'll have to rely on my memory cause I was way too worked up to take any notes.

So thank you Andrii and Anna! You're a brave man Andrii and I very much respect that and you're a brilliant quality leader Anna!

Key Lessons (safety language somewhat off)
  • Be careful about speaking degradingly about developers; many of us do from time to time (not excluding myself, as somewhat seen below) and that's just not helping us.
  • We aren't set roles; we each create a unique, constantly changing role based on our skills, ambition and current company needs, but we often call them something generic for simplicity. Don't be fooled by this though.
  • The landscape is shifting and more adopt ways of working where the "traditional tester" is harder to fit.
  • Don't tell people (like developers, product owners etc.) they test cause it'll freak them out!
  • Your job is not to think about quality, your job is to make everyone think about quality.
  • What will make users say "wow!" about your product? How can you test for that?
Barefoot
Day one I was approached by Ilari asking me why I wore shoes (just a regular day at Let's Test). I gave him a couple of reasons, to which he gave valid counter arguments. All in all, I promised to try walking barefoot day two. I did, and it worked great. I often have sore feet in the evening of a conference day and, well, barefoot solved that issue. It was actually comfortable outside as well (not counting some of the gravel) so I haven't put on my shoes again since day 2, despite my girlfriend threatening to never walk close to me outside. That doesn't mean I won't put on a pair of shoes soon, but you'll probably find me barefoot again during future conferences. An unexpected but very much appreciated experience.

Hugs and games
Someone said to me after Let's Test last year that the conference almost seems "unprofessional". What he was talking about was all the hugging, playing games, open sponsor bar etc. To me this is what takes Let's Test to the next level! Like Klas Hallberg so cleverly puts it: Having fun is not unprofessional, but doing something in a boring way when the same thing can be achieved just as efficiently in a fun way, that is unprofessional (it makes more sense in Swedish and Klas is probably not the first to say this, but he's my source). And like I said before, I seem to learn better when I laugh and have fun, which happens a lot during Let's Test.

People
Some, to me, new testers that left an impression:

Kristjan Uba - He's smart, dedicated and creative! After hearing Helena talk highly of him I had quite high expectations and he delivered! Immediately after shaking hands he handed me a testing challenge, he handled the star role in Steve's workshop beautifully and in almost any discussion he delivered snappy, insightful answers. Oh, by the way, as CFO I just wanna let you know Kristjan, you'll make a hell of a dad!

Graham Maxfield - This guy is smart and creative (good combo once again). He had all sorts of smart ideas during Steve Smith's keynote (that we foolishly rejected) and provided a lot of good feedback during other workshops and sessions we both attended.

Anna Royzman - After giving a great presentation she handled/facilitated her, my and Andrii's discussion perfectly and added many valuable points to it. This is a name I will look for in future conference programs and I look forward to speaking to her again!

... There are more people who deserve a mention but let's just say, a lot of you made an impression!

Summary
Greatest moment during this adventure? Getting back home hugging my kids. Boring answer but couldn't be more true. Second best moment; the two welcoming hugs and anything related to seeing my super peer Helena in person again for the first time since last Let's Test. Third best moment; when the lessons from Steve Smith's keynote just suddenly started to sink in.

Left to do? Massive post-processing! That "something" is still out of reach and there's more to many of the experiences I had throughout the conference that I just haven't figured out yet.

More substantial stuff to do? Put what I want my current and future role to look like on paper and try using decision tables on the business rules we have in the product I'm currently testing to find potential gaps/missed tests.

See you all soon (I hope) and thank you!

13 April 2014

SWET 7, peer conference on test coaching

SWET 7 (Swedish Workshop on Exploratory Testing, a software testing peer conference) has just come to an end and here is the promised summary and lessons learned from the event. The topic this time was "test coaching".

Don't worry Sweden

First of all: knowing that SWET 6 had few attendees, that SWET 7 was almost cancelled and that the group that finally signed up had far less experience than groups from previous iterations, you might be concerned about where context driven testing is heading in Sweden. Well, don't be! There's a ridiculously talented group of up-and-coming testers. I feel fortunate to have met some of them already!

The venue

Villa Mälargården was a great venue. We had the conference center all to ourselves, it was well set up and the chef's passion (and skill) was a perfect fit, both for our palates and for a conference with passion as a common denominator. Thank you Michael Albrecht for doing all the arrangements, you did an awesome job!

Intro

Before the experience reports (which I will call sessions by the way) we went through practical information like how to use K-cards and what goals we had, as well as doing a check-in. Interestingly, this time James emphasized stating possible distractions during the check-in. In at least two cases this definitely provided valuable information: Michael telling us about illness at home, which he later, very abruptly, had to leave to attend to, and Annie telling us about her migraine and how it could affect her, which made me better understand her reaction when I tried to start a couple of discussions.

Before starting, James also showed and explained his view on coaching; a great way to ensure we all had common ground to refer back to, and for many of us a way to better understand what coaching can be.

Session 1, Michael Albrecht

Michael Albrecht started by speaking about a consulting job where several groups within a company needed coaching and training to change their testing process.

Takeaways
  • Set the goals with everyone involved (e.g. the manager ordering the coaching and the participants), and follow up on these goals with the same group.
  • There are many aspects of coaching to look at (and improve), e.g. facilitation, setup/preparation, the actual questions/exercises, follow-up, session quality evaluation etc.
  • Coaching is not necessarily a one-on-one activity.
  • Propose a radical idea and ask why it couldn't substitute for what's currently implemented, to force people to think differently and see beyond what exists.

Session 2, Erik Brickarp

I want to discuss this a little further with Helena Jeret-Mäe, with whom I share this experience, before posting. But it revolved around Transpection Tuesdays: how, and why, we've integrated more and more coaching into them, the format we use and the lessons we've learned.

Session 3, Liza Ivinskaia

Liza spoke about her first test coaching experience and lessons learned from this.

Takeaways
  • Surveys can be used as a quick way of reaching out to a lot of people for some simpler coaching, and/or as valuable input to coaching decisions (e.g. who to focus on).
  • Spend time with the people you are about to coach to make sure you understand their context.
  • Make sure you focus not just on what's wrong but also point out what people are doing right.
  • Presenter technique: Draw a big mind map (e.g. on a whiteboard) to help audience "navigate" your presentation. To enhance, create or highlight the connections as you present.
  • "Don't look at it as a problem, look at it as a challenge"
  • Everyone has coaching experience. You've most likely helped a friend/sibling/child/partner in a challenging situation or tried to help someone understand something in school for instance. Some of that is likely coaching.

Lightning talks

Annie Rydholm
Talked about her coaching sessions with Carsten Feilberg and how these have helped her.
  • Find out the missing "because" by asking "why"
David Högberg
Talked about his failed attempt to teach a student note-taking, and lessons from that.
  • Taking notes is a vital skill we need to practice
  • We all take notes differently; teaching someone "their style" is hard
  • Take notes throughout the day and at the end of the day, collect them in a mind map
Björn Kinell
Talked about a mistake when coaching, where he had been too impatient to let the coachee figure out an answer for himself.
  • It's easy to get impatient and just give someone your answers. Coaching requires patience.
Amanda Johansson
Talked about how she basically made herself obsolete in a team.
  • Making yourself obsolete in a team is an interesting goal: it pushes you to coach and to help people take responsibility for building quality in, as well as for finding bugs.
Meike Mertsch
Talked about her coaching session with James Bach and how what she learned unexpectedly became valuable much later.
  • You never know when and what experiences/knowledge will come in handy
  • Explaining exactly what you do provides insight for yourself

Lesson: "Innocent girls"

I heard, at least three times, from different testers and in different settings, "I was surprised this innocent girl was so tough". I think that is an indication that we should stop assuming an innocent-looking girl (or, well, any girl) equals weak/fragile. My own experience in life is that boys might be better at hiding/faking, but we're not tougher.

Lesson: Admitting being human

I heard David Högberg, in several open seasons as well as between sessions, say things like "did you admit?", "it's okay to be human", "admitting can build trust" etc. He definitely hammered in a valuable takeaway from SWET: there is nothing wrong with being human; in fact, it often helps strengthen relationships and build trust!

The mentor

I want to give a special thanks to James Bach. Already before SWET 7 I had tremendous respect for the man who helped me find my way into testing, but after seeing him work with all these rather fresh participants (including myself), my admiration has grown even more. He inspired, he encouraged, he supported, he taught, he led by example; all in all, he did everything to make sure we got the best out of every single participant, rookie or veteran (or at least that's my view). That on its own was a great and valuable experience for me in community building and group mentoring. Thank you James!

Summary

If you look closely, you've likely seen/experienced much more coaching than you think.

... and there are plenty of up-and-coming testers to keep an eye on!

Participants

These are the people who taught me the lessons shared in this post; all of them curious, brave and smart testers worth listening to.

Amanda Johansson
Annie Rydholm
Björn Kinell
David Högberg
Liza Ivinskaia
Mikael Ulander

Thank you!

28 March 2014

What happens if...

Intro

The quality categorization below derives from the work of James Bach / the Test Eye.

What happens if...

... we don't care about security

The internet is a playground for brilliant people who like to figure out how to get past applications' moats or bring them down. Leaked credit card numbers, leaked personal or other sensitive information, leaked passwords, malware on customers' computers, defaced company websites and unavailable services, all due to security shortcomings in software, happen daily. This can lead to lawsuits, loss of customers, the company losing a lot of money or, worst case, bankruptcy.

Even if your application doesn't use network communication, no one will care the day someone hacks/steals a computer with (or, even worse, because of) your app and gets their hands on sensitive information. Also, few (read: no) customers will care whether it was a third-party product that failed. The "we didn't build it so we don't have to test it" mentality is simply not productive.

... we don't care about charisma

The number of applications out there is huge, and that has changed how we pick our preferred ones. Now you can install an application, look at it, dislike its design (not professional, fresh, modern, elegant, beautiful, simple, safe... looking enough) and quickly move on to the next, without ever really "using it". I repeat: you can lose customers simply because your product doesn't look like the product it is!

"It's just an [pick your type of application], no one will care about how it looks" is rarely true anymore. Also remember charisma is not just "cool design features", it can mean anything from simple/elegant to looking robust/secure (we do judge an application by its cover).

... we don't care about usability

Just because someone initially chooses your application doesn't mean they'll stick with it. One annoying extra step or interruption in their workflow might be enough to trigger an "uninstall and move on". And even if customers are tolerant and the competition isn't that fierce, sufficiently bad usability can render key features useless (and the money spent developing them wasted).

Also, usability problems can add a lot of cost for customer support, as they'll have to spend time explaining things that the product should make obvious.

One big enemy of usability, in my experience, is programmer shortcuts. "Well, the framework doesn't really support dropdowns, so we had to go with five links instead." Beware of just accepting this; usually technology isn't the limiting factor, even though it might feel like it when you're in the middle of things (my experience as both a tester and a programmer).

... we don't care about robustness

A crash or loss of service, even for a brief moment, can be enough to get your application uninstalled. Also, downtime can be extremely costly for customers and, worst case, a customer may not just stop using the product but also demand compensation. No matter what, frequent/long downtimes and other robustness problems can quickly hurt the company's/product's reputation. Remember, time is money.

One common mistake: many testers just open the application, do their tests and close it down/reset. Customers, on the other hand, often work for hours or even weeks without closing anything down. Can your application handle that? Are you sure? (A hedged sketch of such a long-session check follows below.)
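As an illustration only: a minimal "soak" sketch, assuming the third-party psutil library and an action() callback you'd replace with real interactions against your product. The workload, duration and process id are all placeholders.

    import time
    import psutil  # third-party: pip install psutil

    def soak(pid, action, hours=8.0, sample_every_s=60):
        """Repeat a customer-like action for hours while logging the target
        process's memory, the way a user who never closes the app would."""
        proc = psutil.Process(pid)
        start_rss = proc.memory_info().rss
        deadline = time.time() + hours * 3600
        while time.time() < deadline:
            action()  # e.g. open a view, save a file, refresh a list...
            time.sleep(sample_every_s)
            rss = proc.memory_info().rss
            print(f"RSS {rss / 1e6:.1f} MB (started at {start_rss / 1e6:.1f} MB)")

    # Hypothetical usage: soak(app_pid, lambda: client.refresh_dashboard())

A steadily climbing memory figure over such a run is the kind of robustness problem a fifteen-minute test session will never show you.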

... we don't care about performance

We are an impatient, stressed society; if a webpage/application/service doesn't load quickly enough, we simply won't wait to see whether the result is pleasing or not. Slow apps are quickly gone.

Once again, some experience: if a function is really slow and can't be tweaked to run faster (e.g. a complex algorithm), maybe it shouldn't be in the product at all, even though it seems valuable. Or the algorithm should be simplified, or turned into some kind of background job (a small sketch below). All this to avoid frustrated users (if a function is in the product, it's expected to work and work well!).
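A minimal sketch of the background-job idea, with invented names: hand the slow computation to a worker so the caller gets an immediate answer and only waits if and when it asks for the result.

    from concurrent.futures import ThreadPoolExecutor
    import time

    executor = ThreadPoolExecutor(max_workers=2)

    def complex_algorithm(data):
        time.sleep(10)  # stand-in for the genuinely slow computation
        return sum(data)

    def handle_request(data):
        """Acknowledge immediately; the heavy work continues in the background."""
        return executor.submit(complex_algorithm, data)

    job = handle_request(range(1_000_000))
    print("request acknowledged, still computing:", not job.done())
    print("result when ready:", job.result())  # blocks only when asked for

Whether this actually helps depends on the context; the point is that the user is no longer staring at a frozen screen while the algorithm runs.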

... we don't care about compatibility

I happen to like my browser. If you don't support that browser (where support means "works better than the competitors", not "well, it does, but..."), I won't even consider you an option... And by the way, the world is going mobile, and Windows is no longer "the only" operating system on the market.

So what happens if you "support Windows, iOS and Linux" but only Windows is actually tested and in reality your product hasn't even started on Linux since three versions back? Well, a customer could demand money back, which adds administrative costs, or compensation for lost money installing/upgrading. A company buying your product might ditch you on all platforms due to distrust or you might be headlining a lot more blogs than you wished for (or rather, not for the reasons you wished for). Simply put, it would be considered lying and lying is rarely appreciated. Also shortening the list of compatible systems due to uncertainty (due to lakc of testing) is of course a deal breaker for potential customers on those systems.

... we don't care about IT-ability

Every install/reinstall and update/upgrade (or lack thereof) is a potential source of frustration and an incentive to uninstall. Make sure you avoid any unnecessary friction, or even a great application may find itself deleted.

Also, in bigger companies there's likely someone maintaining/supporting your product internally. Those people tend to have quite a bit of say when it comes to selecting applications, so you don't want your application/brand/company to end up on their blacklist: "It seems like a great product, but our IT department says it's impossible to fit into our software infrastructure".

Make time

We all have limited resources and have to prioritize. What I'm trying to explain above is not "we need to double the number of testers" but rather "we need to prioritize testing and make sure the right people focus on the right stuff".

Of course, a big part of this is arguing for the importance of quality, not just of the functions themselves. I hope the list above can aid you in this.

Programmers

Many things are better tested by programmers (they know what has been implemented and can manipulate code for targeted, temporary testability). And no matter what, a simple bug caught by a programmer and fixed immediately is a lot less time consuming than one a tester finds: the tester has to report it, may be blocked until it's fixed and the application rebuilt, the programmer has to stop what they're doing, and context switching likely hits both parties. Don't underestimate the cost saved by programmers who test their own code well and (almost) always hand over high quality stuff!

Product owners

My product owner does an awesome testing job! What he does is simply make sure that what he gets is what he asked for, and since he wrote the requirement/specification/user story that led to the code, he knows what he actually meant rather than what can be interpreted from the text (meaning is often lost/changed when thoughts are translated into written language). His work saves me (the tester) a lot of time figuring out whether he actually means what I interpret him to mean.

Domain experts

In my current company, much of the technical complexity of our main product is in the handling of security cameras. In this area I'm glad we have a great domain expert who both tests and technically supports my testing when needed. This, once again, frees up my time to focus on e.g. security in this area, which I know more about.

Combined effort

I (let's hope so, at least) have broader knowledge about testing and the potential ways to put a system at risk of failing. However, programmers, product owners, domain experts, graphic/interaction designers etc. know their own domains much better. Teaming up when testing is rarely a bad idea, as it's a way for both parties to learn from each other and to better utilize each person's strengths (e.g. the domain expert shows various ways to feed data into the system while the tester suggests various kinds of data to try).

Not everything can be tested in isolation

Testing "what has just been developed" is not enough (which some Scrum fans seems to find provoking). Often when testing we simply need to zoom out, for several reasons:
  • E.g. Consistency, flow (as a usability aspect) and data flow (as in how data is transferred through the product) requires you too look at the whole, or a big part of the, product.
  • We learn new things all the time which also concerns old functionality and a difference between programming and testing is testers don't need to make expensive refactoring to apply new knowledge on what already exists.
  • We test how new functionality integrates with old, and sometimes those new changes also affect how old functionality integrates with other old functionality.
  • Security, performance, robustness etc. is not always efficient to test for a specific function, instead you need regular sessions to test it for all functionality in a sprint, from the last month or, once again, the whole product.
  • You don't always know what existing parts new functionality impacts and programmers are not always aware of what might, even on a code level, affect existing functionality (and if they are it's not necessarily communicated). Thus we need to simply "look over the whole thing" or look at a more general area every once in a while.
  • ... and when looking at old functionality we inevitably find issues we've missed before due to new tests, new knowledge, changed context etc.
What sums this up, I think, is: a lot of what needs testing is not efficient/possible to test in isolation. It sure sucks to correct year-old bugs as a programmer, but you need to understand: testing is not programming, nor is it "extended unit tests". To a tester, modules interact and form something together; they aren't isolated blocks with APIs... Oh, and by the way, testers and programmers: never expect two sides to have interpreted an API spec the same way.

One way to solve this in e.g. Scrum is to add stories aimed at a kind of testing rather than at a functionality to test, e.g. "security survey" or "user tests/usability", and, if needed, compensate with less testing of each specific function developed (which requires more testing work, and better communicated testing work, by programmers and others). Another solution is to add "system test" periods before release, where the product as a whole is tested without too much regard to what has just been developed.

Let's make this a requirement

Some of this can surely be turned into requirements and/or added to a specification, and I'm not saying that doesn't help; I think it's great to have people who are aware of e.g. the quality characteristics writing the requirements/specifications. But a specification will never include everything, simply because even if you have all your experts doing their best, the product drifts from the spec, reality is never as problem-free as the plans, and the context (users, market windows, staff etc.) will change in one way or another as the project progresses.

So yes, I think considering quality characteristics when writing specifications/requirements is great, but no, it will certainly not cover everything.

Customers never ask for these things!

True, in many cases they don't explicitly ask for things like security and usability. But that's not because they don't care; it's because they simply assume it will be there. And if you do deliver and don't get any feedback about bad quality, do all your customers come back? Do they recommend you to friends and business colleagues, creating new business opportunities?

And sure, if you care less about security you might save some money and still never get into trouble. But if that gamble doesn't go as planned it might, as already said, put you out of business. So make sure you understand the risks really well before taking such a chance.

Summary

Customers will not stick with your products (except in rare cases, and you're not one of them) and your reputation will not be impressive if your products have low quality. And since what great quality means can never be fully predicted or documented, let's not kid ourselves and say that checking against the specification is enough.

Make time by explaining consequences (thus getting the right attention), educate the people around you and make sure everyone does their share of the testing job!

Good luck!

20 March 2014

Lessons from testing my own code

The last 5 months I've spent almost every free minute developing a website for my kids' preschool. This blog post is devoted to observations made while testing my own code as I created it. Notice that many of these are highly subjective and biased, but if nothing else, I hope they can raise some interesting questions.

Observation 1: Finish developing (mentally) before testing
I found it hard to do good testing while I was in the middle of developing something. My brain seemed more focused on how to solve the next problem than on critically questioning what already existed.

On the other hand, when a function was done (both "function" and "done" are ambiguous; the key is the feeling of being able to move on) I found it much easier to shift my focus to "how can I improve this?" or "what are the problems with my solution?". I often set up a few tasks, completed those, and then sat down and committed to making what already existed (including older stuff) awesome before moving on to planning the next tasks.

Lesson: Finish up mentally before doing non-superficial testing.

Observation 2: Some testing was easier to do than other testing
Testing I do rather statically (e.g. following checklists, like the OWASP checklist, or strict procedures) worked better, compared to when I work with other people's products, than more creative or "subjective" testing (no right answer, more about judgment/opinion). In the latter case I often found it much harder to see alternative ways of solving problems, and of course what I created was intuitive, right? .)

Two things I found useful when doing less static testing were scenarios and roles. When examining the product while trying to observe it from the perspective of a certain user, or with a certain task to perform, I could defocus better and question the decisions I had already made... a bit as if I were no longer responsible for the code: "I'm the news editor now, not the developer".

Lesson: Scenarios and roles, I think, helped me fight my "my solution is perfect" bias.

Lesson: I sometimes find checklists thought-limiting when testing others' products, but they did help me observe my own more critically/objectively.

Observation 3: Only testing new functionality was not as efficient
I discovered that my "examine old functionality" sessions seemed to reveal more bugs, both in number and severity, than sessions focused on new, specific functionality. A few reasons for this, I think, are:
  1. I learned new things about the product/environment that inspired new tests, and applying these to existing functionality often revealed bugs I had previously missed.
  2. I got some distance from the functions, making it easier to look at them as if I were a regular user, not the creator.
  3. I found that my affection for specific functionality (wanting my product to work) lowered over time. At some point it was like "I don't want this to work perfectly just to prove to myself that I've gotten better as a programmer".
  4. Naturally, I found it much easier to spot inconsistencies and problems with flows when looking at the product as a whole, not at a single function.
  5. I found it much more motivating to "play around" with the whole product, and following my own energy seemed to make me a better tester than trying to "force myself" to test a specific part.
Lesson: Instead of spending all my time testing new functionality, spending a short session on new functionality and then adding longer (or several shorter) general sessions every once in a while was a more effective way for me to spend my testing time.

Note that I've observed a similar pattern at my job. Although, as a tester in that context I also have to consider the fact that new functionality will be delivered to customers, so I can't "save testing for later" as much, and as a programmer I do prefer to work with code I've recently written. Still, scheduling time to just generally go over the whole product has often proved a valuable practice for me, even when there's a lot of new functionality to look into!

Observation 4: Testing my own code really helped me think about testability
I think any company would benefit from pairing a programmer with a tester; not so much for the tester to give quick feedback on bugs or help with how some functionality is meant to work, but for the programmer to see which parts are hard to observe or control (read: test). Turning it around, as a tester it has been interesting to play with what's actually possible to achieve. Also, with the code in front of me, I noticed I did a lot of things I would never have done on a "regular testing project", like disabling functionality to enable better/quicker testing of back-end stuff and adding printouts I would normally "have to live without" (a minimal sketch of such switches below).
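For example, a sketch with invented names, not code from the actual site: testability switches controlled from the environment, so a side-effecting step is disabled, or extra printouts appear, only when explicitly turned on.

    import os

    # Invented testability switches, off unless explicitly enabled.
    SKIP_EMAIL = os.getenv("TESTABILITY_SKIP_EMAIL") == "1"
    TRACE_SAVES = os.getenv("TESTABILITY_TRACE_SAVES") == "1"

    def send_notification_email(item):
        print(f"(pretend an email about {item!r} was sent here)")

    def save_news_item(item, database):
        if TRACE_SAVES:
            print(f"[testability] saving news item: {item!r}")  # a printout I'd normally live without
        database.append(item)
        if SKIP_EMAIL:
            print("[testability] notification email skipped")  # functionality disabled for faster testing
            return
        send_notification_email(item)  # the real, slow side effect

    save_news_item("Spring party on Friday!", [])

The design point is simply that the switches default to off, so production behavior is untouched unless a tester (or the programmer) opts in.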

Testability, and especially ways to achieve/improve it, has been one of the most interesting lessons learned during this project, both from the perspective of me being a programmer and of me being a tester!

Lesson: Testers benefit from observing programmers/programming, as it helps them understand what's possible, and programmers benefit from observing testers/testing, as it helps them understand what makes a product testable (not to mention the strange ways users can "misuse" their creation .)

Observation 5: Programmers do a lot of testing better than testers
I found knowledge of, and control over, the code invaluable, especially when testing "functional correctness":
  • I can manipulate code as I test (see the sketch after this list)
    e.g. remove client-side form validation to test server-side validation more quickly
  • I know about technical solutions that affect testing
    e.g. due to a technical limitation we need to handle two seemingly identical requests differently
  • I know about the full implemented feature set, including "bonus functionality"
    e.g. no one requested drag and drop but it came with the framework; still, it has to work
  • I know about the code quality/problems
    e.g. the user form handler is very messy, so give it some extra testing attention
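To illustrate the first point: once client-side validation is out of the way, you can also skip the browser entirely and post invalid data straight at the server, to check that it validates on its own. A hypothetical sketch using the requests library; the URL and field names are made up.

    import requests  # third-party: pip install requests

    # Hypothetical form endpoint; client-side validation in the browser
    # would normally block these values before they were ever sent.
    url = "http://localhost:8000/guestbook"
    bad_entries = [
        {"name": "", "message": "empty name"},
        {"name": "x" * 10_000, "message": "absurdly long name"},
        {"name": "<script>alert(1)</script>", "message": "markup in name"},
    ]

    for data in bad_entries:
        response = requests.post(url, data=data)
        # The server must reject these on its own, e.g. with a 4xx status.
        print(data["message"], "->", response.status_code)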
Communication can help with much of this, but it's hard as a programmer to know what a tester needs, it's hard as a tester to know what you need when you don't know what's available, and it's hard to be fully aware of what you actually take into consideration when testing, no matter whether you're a programmer or a tester.

As the programmer, you're of course likely to agree with your solution and to test for the things you've already built protection against, simply because those are the problems/scenarios you're familiar with/have identified. Thus a second, third and fourth opinion is important, but don't underestimate the awesome bulk of unique knowledge the programmer possesses!

Lesson: Programmers testing their own code is extremely important; much of the testing they do would not be done by a tester, or not as efficiently.

Lesson: Pairing the programmer with a tester/test coach is probably awesome (in fact, I think it is).

Observation 6: Mixing up customer value and technical practicality
"That checkbox is not vertically aligned but I know it's really messy to fix so it's not important".

I found this kind of mind trap hard to fight, even though I'm normally a tester.

The only real protection against it that I found was to force myself to solve every problem I found. In the beginning this was time consuming and demotivating (it slowed progress quite a bit), but soon I noticed it greatly improved my "base quality", as many of the problems kept recurring until I dug in and learned how to solve them.

Lesson: An extremely low tolerance for faults really helps you improve! Of course I've had to let a few problems slide, but I've always given them significant time, weighed the options/workarounds and tried to confirm with the professional community that the problem is impossible/not worth solving. The stuff that was technically inconvenient to solve often provided the big programmer "a-ha moments" when solved.

Summary
Forcing programmers to test their own code more will not make testers obsolete, just as adding more testers won't solve the problem of programmers not being given enough time to test their code. Both are important, and both add unique elements to testing!

I think programmers can severely affect their own ability to test by how they behave/plan. For me, testing intensively while in the middle of development and trying to make "objective judgments" didn't work very well. Checklists, and waiting with non-superficial testing until "mentally done", seemed to greatly improve my ability to test my own code.

So can programmers test their own code?
Yes, and they should, but not alone for best results; that's my experience!

Agree?

15 February 2014

The ideal testing project

To get to the good stuff quickly, without much explanation of why and how this list exists, jump to the "How to use the list" chapter.

What?
This "exercise" is rather simple:
Describe an ideal testing project

So first off, there's a lot of ambiguity in that statement. Ideal? To whom? In what context? Testing project? Meaning what? What are the boundaries? Valid in what context?

Exactly what this means is not that important, and I'll explain why in the next chapter. But to give you a good enough idea to understand this post: "Using the experience you have, list the actions/rituals/behaviors etc. you think would make for the best possible testing". That still leaves you with a ton of ambiguity, but hopefully you'll agree that's not the bigger issue here once I explain...

Why?
I often find that when I try to look at my current context and how to improve it, I limit my thinking. By trying to list "ideal attributes" I can look at each of them and ask: do we do this today, to what extent, is it ideal in our context, and how would the ideal scenario look in my specific context? Those questions, for me, tend to break that mental limitation, and they're, once again for me, especially useful for spotting those individually insignificant changes that together make for a huge improvement.

How?
Feel free to skip to the next chapter if you don't care about how this list was created.

I've tried this "exercise" three times. First time was in a workshop at Securitas Direct/Verisure on testability. One question was: "how can we improve our testability" and my brilliant colleague Niklas Grönberg, suggested we, instead of thinking about what we had, should think about how we wish things were. It generated several great ideas and everyone, no matter of current context, could add to the list.

Soon after, I suggested trying the same thing on a Transpection Tuesday with Helena Jeret-Mäe, but focusing on the project in general. We spent roughly three hours adding to the list, all while sharing ideas on how the items could be achieved in our respective contexts.

Finally, I tried the idea as a group exercise at EAST. In this case 8-10 people shouted out ideas that were discussed/described further and then added to lists on a whiteboard. When we were done, everyone got a few minutes to look at the lists (we filled two whiteboards), and in an open forum we shared our key takeaways. I'll add a few notes from that at the end.

To support/guide the thinking a bit, I added a few categories, both when running it as an exercise at EAST and with Helena. This also (I think) helped explain what I was looking for. You will see a similar setup in this post.

How to use the list?
These are not the attributes of an ideal project; they are attributes that seemed ideal to someone, in some context relevant to that person. Thus you can't just take an item from the list and say "well, how do we achieve this?". Instead I suggest you look at an idea and ask yourself, as described before:
  • Is this relevant to my context?
  • Why/why not?
  • Do we do this today?
  • To what extent?
  • How would this ideally work in my unique context?
Important: I do not agree with all the items below. But I wanted to include everything I picked up, since each item was mentioned by some skilled tester as ideal in a context (s)he could relate to, and is thus potentially helpful to someone reading this.

Also notice that some of the items conflict with each other. Once again: something might be valid in one context, or one aspect of a context, while something else is true in a different context.

General
  • Clear boundaries/limitations throughout the whole project
    Time, resources etc., including knowledge about why it's limited and why the limit is set where it is
  • Clear and well communicated goals and milestones
    Iteratively challenged throughout the whole project
  • Knowledge about users and buyers
    In bigger companies the buyer and user can be two very different groups
  • Access (throughout the whole project) to users and buyers
  • Knowledge about stakeholders
  • Knowledge about market (e.g. competitors and market windows)
  • Manageable amount of changes and quickly informed about these
  • No hidden agendas
  • A product you...
    ... like to work with
    ... find meaningful
    ... believe in
    ... want
  • Balanced work flow
  • Time to experiment
    Including product, working methods, techniques and tools
  • Being included and listened to
    E.g. invited to meetings
  • See the product being used
    And see that it matters to the customer
  • Low level of (boring) repetition
  • Some "hype factor"
  • Test ideas generated from day one
  • The agreed level of quality and customer focus is not lowered throughout the project
  • Everyone strives to achieve better results when agreed upon
    Includes everything from quality/scope to efficiency/cost/time
  • One focus at a time (no multitasking)
  • Testing being a respected and prioritized activity that everyone cares about
Start of project and general
  • Involved when there's a problem to solve, not a solution to implement
  • Well communicated why (why it's a problem, why it's important)
    This includes being presented the real why, not the "official" why.
  • Kick off to meet, bond and discuss
  • Being listened to regarding testing issues and general risks we see
  • Everyone genuinely agrees upon what to build and why
    Also includes for instance salespeople and maintenance/support
Team
  • Motivated
  • Mixed group of expertise/experience
  • Skilled to do the job
  • A willingness to get better, both individually and as a team
  • Problems and concerns are shared
  • Having fun together!
  • Everyone is listened to
  • Respect for each other
  • Working well together
  • Domain knowledge
  • Clear, effective communication
Implementation discussions
  • Quality characteristics are discussed and clearly prioritized.
    This is an iterative process
    Stakeholders, customers, testers, programmers should be involved
    Most important is to state what is not that prioritized
  • All solutions should have a known "why"
  • Testable paper mocks or similar
  • Customer feedback on these mocks, communicated to the testers
  • Testability and relevant quality characteristics issues should be discussed as part of each solution
  • Always 3 options to consider (<- much debate to "at least", "roughly" or "exactly" three)
  • Key areas and lower prioritized areas are discussed and agreed upon
Test planning and strategy
  • Risks collected from testers, customers, stakeholders etc. discussed and prioritized
  • Discussions on how to test with programmers involved
  • Test process revisited and tweaked to fit current context
    E.g. what documentation, for who and why. When/how to involve programmers etc.
  • Testers, programmers, customers and stakeholders read, understand and commit to the strategy
  • Strategy is continuously revisited throughout the whole project
Testing / Developing
  • Feedback on errors (email, popup), rather than scanning a log
    (could be achieved with a tool scanning a log of course)
  • Being able to start a flow at any relevant position
    (e.g. in the middle of a setup wizard)
  • Being able to extract any relevant information from the product
  • Early first delivery and frequent deliveries throughout the whole project
  • Attention on bugs, not just new functionality
  • Programmers being aware of and taking relevant quality characteristics into account when developing
  • High quality code delivered
    Helped by code reviews, static code analysis, TDD/BDD, quick manual sanity tests, (automated) smoketests, skilled programmers taking pride in delivering great quality code etc.
  • Testers pairing up with programmers, customers, stakeholders when testing
  • No distance between programmers and testers (and stakeholders)
  • No delays from code written to code up and running in the test environment
  • All target systems/platforms/versions used by customers available to testers
  • Regularly time to test with and speak to customers
  • The why should be well explained and motivated for every demanded administrative action
    E.g. documentation and reporting
  • Iterative reflection on processes and tools that everyone is engaged in, genuinely wants to engage in and is given time to act upon.
  • Progress is celebrated!
  • Everyone is having fun!
  • Testers are not the only ones testing
    Stakeholders, programmers and customers test the product as well, both while pairing with testers and on their own.
  • Mistakes being embraced
    Admitting mistakes should never lead to shame; everyone should naturally want to share the mistakes they've made rather than keep them secret.
  • Lots of questions asked and people taking time to answer questions
  • Amazing tools that support the desired process, not dictating it
  • Short optimized feedback loops
    Includes communication
  • Testers focusing on testing not checking
  • No stupid measurements and KPIs
  • Stakeholders accept relevant information, no extra translation necessary
    Much of this is probably better explained as both parties striving to understand each other and learning to speak each other's language.
  • Team! Not individuals
  • Continuously challenging goals and milestones as more is learned about the product
  • Stakeholders clearly explaining why they chose to ignore certain risks communicated by testers
Finishing up
  • Customers blown away
  • World peace and end of world hunger
  • Time to reflect
    ... and result of reflection being used as input for the next project
  • Clear finish
  • Getting the time to feel proud
  • Being shown appreciation from managers, customers and stakeholders
  • Lessons learned shared between various projects (if projects are run in parallel with different testers)
Some reflections from the wrap up during EAST
As I said before, we ended the exercise at EAST by sharing lessons and ideas to bring back to our respective contexts. One observation was that most of them related to communication, e.g. ideas to improve communication or information someone should ask for. Another was that very different kinds of lessons were described, yet everyone seemed to identify something relevant to their context that they might not have thought about before, or now thought about in a different way, which to me indicates that the list can be valuable to most testers.

Feel free to share your own additions as comments below, I will gladly add them to the list!

And to all of you who've already added to this list: thank you! This is not my work, it's our work! Actually, I even tried to shut up during EAST! (which went... okay I guess, at least compared to how it usually goes)

Thank you for reading!