03 October 2013

Arguing for Exploratory Testing, part 1, Traceability

Background
The topic for my and Helena Jeret Mäe's last Transpection Tuesday was Arguing for Exploratory Testing. What we basically wanted to achieve was to get better at explaining the pros (and cons) of exploratory testing in a concise way, as well as to identify common preconceptions about scripted versus exploratory testing.

Input
We had defined 15 subtopics such as time estimations, credibility and making sure the important testing is done. The first item on this list was traceability, which turned out to provide enough material to fill the whole two-hour session.

What is Traceability
The first question was: What do we mean by traceability?

Our answer: Being able to track what has been tested, how, when and by whom.

Why do we want Traceability
The next question was why we want traceability. We quickly formed a list, but reading it now makes me realize we mixed together traceability and the claimed benefits of having a trunk of test cases. But anyway:
  • External demands
  • Ensure work has been performed
  • Base for further testing
  • Support handovers
  • Create a map
  • Reuse
General thoughts
One thing we came back to over and over again was: The best way (often related to level of detail) to achieve good enough traceability is highly context dependent! For example, a simple mind map with short comments is good enough for one company, while another requires every session to be recorded, with the recordings stored and indexed together with session notes, debrief summaries and saved logs. It all depends!

Another recurring theme was: "But do we really achieve that kind of traceability with test cases?" I will not bring up those discussions much in this post, but expect another one on "false assumptions about scripted and exploratory testing" soon.

Terms

Charter
A charter is basically an area to test, a way to break down a big testing mission. Notice though that as you test, new charters might come up, so it's by no means a definitive plan. Read more >>

Test idea
Typically a one-liner describing one or more tests you want to do. Read more >>

Session
A timeboxed, uninterrupted test sitting, typically 60-120 minutes. Read more >>

Debrief
Refers to an activity happening after a session where the tester explains what has been done to, for example, a test manager. This also includes clarifying questions, feedback and other kinds of dialog to help both parties learn from the session. Read more >>

Recording

We mainly refer to screen recording (video, either using a screen recording tool or an external video camera), but it could also mean recording audio, saving logs/traces or other ways of preserving what has been done. A good resource >>

External demands
This refers to regulated businesses (watch the excellent presentation What is good evidence by Griffin Jones), evidence in a potential lawsuit or customers demanding test data.

Possible solutions:
  • Record the sessions, preferably with configuration (device, version, settings etc.) explained if that matters. Adding commentary might improve the value as well (communicating purpose, observations etc.). This is also typically a scenario where logs/traces can be a required addition to a video recording. Once again, watch What is good evidence.
  • Store session notes
  • Store session summaries
  • Store charters
  • Store debrief summaries
  • Store test ideas (assuming they have been covered by your testing)
Creating support for finding old information (an index) seems key as well. For this, charters, time stamps and/or categories might be useful to tag your saved material with.

Ensure work has been performed
The first question raised was: Is this really something we want to encourage? Our general answer is no, with the motivation that, in our experience, people tend to do things to look good rather than do what is needed/valuable when closely monitored. But being able to know that the testers actually do their job is closely connected to credibility and transparency, so it's still a valid question.

Possible solutions:
  • Debriefs
  • Recordings
  • Notes
  • Bugs reported (a really bad metric for this, but it can indicate something!)
Debriefs most often seemed to be the preferred approach. During a good debrief, the person receiving the debrief asks follow-up questions that require the tester to explain the testing done. A byproduct of this process is ensuring that the tester actually did a good job / any job at all. But once again: if your focus is on monitoring, the people monitored (testers as well as non-testers) are likely to waste time proving the job has been done rather than actually working!

Base for further testing
Let's say we've finished the prepared scope or are suddenly given an extra week to test something. If we can't go back and use already executed tests as inspiration, how do we know where to continue?

Possible solutions:
  • Having a bulk of charters as inspiration
  • Make comments about testing you've left out in your finished charters/sessions
  • Review session notes
We also discussed whether there's value in actually looking at what has been done. Often we found that the time it takes to analyse the work already done might not be worth it (information being too detailed makes it hard to get an overview and learn from it quickly). Simply exploring, using knowledge we might not have had the first time, or with a different tester than when we first tested, is often more than enough to add value. After all, the time we spend analysing is time we cannot spend testing (which might or might not be well invested).

Support handovers
One tester leaves (quits, parental leave, other tasks etc.) and another has to take over. How can we manage such a change without a set scope of test cases? First of all, the new tester does have to spend some time getting familiar with the feature in exploratory testing, but this is also true when using test cases since we, for instance, can't predict what problems we will run into and thus can't prepare instructions for those!

But we can make it easier:
  • Charters (with status)
  • Debrief
  • Documented test ideas with already investigated ideas being marked
  • Session notes or session summaries
  • Mind maps or other test planning with already tested parts commented
  • Documenting lessons learned (like operational instructions)
Debrief in this case refers to a general debrief by the tester leaving: what has been done, what we know is left, problems seen, lessons learned, where information is stored, who to talk to etc. Of course, if the switch happens very suddenly (e.g. sickness), performing this is not possible, and in that case it's important that testers are professional enough to document what has been done (mind maps, short plans, visualizations, debrief/session summaries, charters). This is once again true for both exploratory and scripted testing.

Create a map
A bulk of test cases combined with statuses can somewhat be used to draw a map of what has been covered and what is left to test. How can we visualize this without test cases?

Possible solutions:
  • Charters
  • A mind map describing what has been tested
  • A picture/model of our product with comments about testing/coverage
  • Other visualizations like diagrams
  • The Low Tech Dashboard
A few important notes:
  1. You sure have a map with test cases, but is it actually anywhere near accurate? Say we have two equally complex functions. One takes 1 argument, one takes 10. We will likely have at least 10 times as many test cases to cover the second function. So if we execute all the test cases for the second function, have we really covered over 90% (with "covered" only considering these 2 functions)?
  2. Even if equally sized, that map would not cover what we didn't anticipate from the beginning, so you still need to add an up-to-date judgement/evaluation (e.g. "wow, that network protocol sure was more complex than we expected during the planning, we need more testing of it!").
  3. Scale is really important. Do we want to see Tartu, Estonia, Europe, the world or the Milky Way galaxy? We might need different visualizations to create all the maps we need (once again, think about value, how much time can we spare to keep these updated).
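Note 1 can be made concrete with a small calculation (the function names and test-case counts below are made up purely for illustration):

```python
# Hypothetical illustration of note 1: a test-case count is a misleading map.
# Two equally complex functions; function_b just takes more arguments, so it
# happens to have ~10 times as many test cases written for it.
test_cases = {"function_a": 1, "function_b": 10}

executed = test_cases["function_b"]      # we run all of b's cases, none of a's
total_cases = sum(test_cases.values())

case_coverage = executed / total_cases   # what the test-case map reports
function_coverage = 1 / len(test_cases)  # what we actually covered

print(f"case coverage:     {case_coverage:.0%}")      # 91%
print(f"function coverage: {function_coverage:.0%}")  # 50%
```

The map says we are over 90% done while, function for function, we are only halfway. That gap is exactly the distortion the note warns about.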
Reuse
Later a similar feature or a feature impacting the one we just tested is developed and we want to reuse the work previously done. How can we do this without test cases?

First of all, reuse is one of the places where test cases are powerful. However, you have the minesweeper problem: If you walk the same lane in a minefield over and over, as new mines are constantly added, it's likely that the number of mines beside your narrow track starts to build up while few will happen to end up in your path. Meaning: running the same tests over and over is less likely to catch new bugs than creating new tests, so the value quickly diminishes (more tests executed does not equal more valuable ground covered).

What we often would suggest is to use knowledge acquired the first time as foundation for new testing to speed it up. Think about the new risks introduced and what needs to be tested based on that (like with new functionality) rather than how old test cases might fit into your testing.

Possible solutions:
  • Reuse of charters
  • Reuse of test ideas
  • Look at old session notes / summaries
  • Use old recordings (the simpler the form of the recordings the better for this, watching several hours of screen recording is probably waste)
  • Start a wiki page/document/similar for each feature and add lessons learned, where to find info, problems etc. as you test.
Summary
There are many ways of achieving traceability (and similar potential benefits of test case trunks) in exploratory testing. Session Based Test Management principles seem to be the most straightforward way, but keeping track of test ideas or using other approaches works as well. All have their own contexts where they seem to work best (e.g. SBTM might add too much overhead for a simple project).

All in all, if someone claims "You lose traceability with exploratory testing", ask what that person means more precisely (e.g. presenting testing data to a customer) and explain the alternatives. Notice this is only based on our two-hour discussion and there is a whole lot more to add, so think for yourself as well! Also question whether you actually achieve the kind of traceability requested using a scripted approach, and at what cost. Finally, question whether the requested traceability is actually worth its cost no matter if exploratory or scripted testing is used. Doing unnecessary work is wasteful no matter what approach you use.

Finally: There are still contexts where a highly scripted approach is likely the best option but the closer you get to a pure scripted approach the fewer and more extreme the contexts become.

Thank you for reading!

And thank you Helena, see you next week!

27 September 2013

Going Exploratory

Prerequisites
  • Test process dictating a factory testing kind of approach.
  • Progress was presented as pass-fail ratios with a brief comment at best
  • Test cases were saved in a humongous, very general, database application.
  • Bugs and requirements were saved in another humongous, very general, database application.
  • Most documents were based on big, thorough templates.
  • All development was split into features, and a feature was then further split into smaller packages. A large feature could take over a year to finish (implementation description written to hand off to integration testing) while packages were typically released once every two weeks.
  • The product was a huge real time system (middle-node in an even bigger system), had legacy from 1975 and was mostly written in a language older than C.
March to June - Starting out
Early in 2012 I was asked if I wanted to join a pioneering (second), self directed, cross functional team at my former job. It was the golden opportunity I had been searching for, in my wish to introduce more context driven thinking. I answered yes immediately.

Eager to put everything I had been reading about into practice I started scouting my possibilities. First mission was to figure out what "self directed team" actually meant. The answer was rather disappointing: "Well, you control the inner process but you have to provide the same output as everyone else" (documents, test case storage etc.).

I decided to interpret output a bit more... generally (ask Meike about my willingness to test or bend rules). I began asking the various receivers of progress reports, strategy documents and other output what questions they tried to answer with these artifacts. I then decided to interpret "answering the same questions" as "same output", which would later stir up a lot of criticism, but luckily no one seemed to know who was in charge of us at this point, so I got away with it for now.

I began to wildly change everything around me, more or less removing the whole foundation on which all our previous testing relied without adding anything new. For a while it seemed fine. I got shit done! It was fun, it was quick, it felt efficient, I was -exploring-!

July - Chaos

Vacation time, everyone was calm, just mentally preparing for a month of freedom.

... except anyone in contact with the testing in my team...

Everything was in chaos! In the midst of trying out everything I wanted to implement (but didn't understand), I had lost track. To make matters worse I had to, in the middle of this chaos, hand over my work to an experienced tester who would join my team starting the first day of my vacation. I scratched my head trying to figure out what I had done and what I wanted him to do. I sat down and started writing, trying to at least make it look like I had things under control. Doing this I realized my team was not informed about half of the stuff I called "how we work with test in my team". The dreamy world I had been living in dimmed away as reality slowly materialized in the document in front of me. When done I could at least breathe a sigh of relief; I hadn't screwed up everything (yet). I handed over my work and went on vacation.


Vacation
A well-packed schedule was turned upside down when my youngest son was hospitalized with a severe bacterial infection. All the traveling back and forth to the hospital gave me plenty of time to reflect on my work. I slowly started to see ways of turning my mess into something valuable.

August - The turning point

Vacation ended and I headed back to work. The day I came back, my new testing colleague, who we can call Saam, since that's his name, left for his four weeks of freedom. I started looking at his testing. It was also chaotic, but a lot more structured (call it professional if you like) than mine. I started to clean up my mess by attacking the main problem at hand: How to know what has been tested?

The first solution was nothing fancy, but oh so important: A simple spreadsheet with functions to test on one axis and general stuff like how the function works under load or during restarts on the other. Each box that formed had a color representing its status, yellow for unknown, red for open bugs, white for invalid combinations and so forth. It was basically a simpler version of the Low Tech Dashboard, which I unfortunately didn't know about at the time (if I were to redo it I would use the Low Tech Dashboard as my foundation). The document started almost entirely yellow (unknown) but at least I had a plan.
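As a rough sketch of what such a spreadsheet amounts to as a data structure (the function names, conditions and the green status below are my own assumptions; the post only mentions yellow, red and white), it could be modeled like this:

```python
# Sketch of the coverage spreadsheet: functions to test on one axis,
# general conditions on the other, each cell holding a status color.
functions = ["login", "search", "export"]
conditions = ["basic use", "under load", "during restart"]

# Everything starts yellow (unknown), just like the document described.
board = {f: {c: "yellow" for c in conditions} for f in functions}

# Status updates as testing progresses (example values, not from the post):
board["login"]["basic use"] = "green"        # assumed color: tested, looks ok
board["search"]["under load"] = "red"        # open bug
board["export"]["during restart"] = "white"  # invalid combination

# Crude text rendering of the dashboard.
print(f"{'':<8}" + "  ".join(f"{c:<14}" for c in conditions))
for f in functions:
    print(f"{f:<8}" + "  ".join(f"{board[f][c]:<14}" for c in conditions))
```

Even something this simple answers the key question at a glance: which areas are still yellow, i.e. untested.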


When Saam came back we started talking about how to work with test in general and found out we had very similar ideas. From here on forward things worked better and better, thanks to brilliant chemistry and adventurous but realistic approaches.


September to October - Upping the pace

Saam loved to dig into technical details, I loved the more overarching principles; together we rocked! As we learned more and more and started to form a routine, I began documenting the process that had emerged. It all ended up as a concise, rather visual, easy-to-grasp, four-page process description, probably the shortest document ever created in-house (of course not exaggerating even the slightest).

It basically said:

  1. Start a wiki page for the feature. Fill it with information like; lessons learned, operational instructions and where to find more detailed information. This wiki page is updated regularly throughout the whole feature and emphasis is on keeping it as short as possible, only including valuable, up to date material.
  2. Learn about the feature, read docs, speak with programmers and experiment with what exists already. While doing this, document test ideas and a brief test strategy. Don't plan too much but make sure good ideas aren't lost.
  3. The moment before code is shipped to us, grab the developers and possibly a testing expert/domain expert. As simple as possible, present our test ideas so far and ask for feedback.
  4. When the code is shipped, use the test ideas as input, but don't in any way limit yourself to only test those. The goal is to steadily add new ones as more is learned about the system and feature.
  5. As the perception about the status of a certain area changes, update the current status (color code) in the spreadsheet and add a few comments to help explain the situation. This spreadsheet serves as progress and coverage report.
  6. When done with testing in an area (group of charters) sit down together, look at the test ideas now documented and see if there is anything important left out based on current knowledge. Consider inviting a developer.
  7. When the whole feature is done, book a feature retrospective to reflect on the work done; what do we want to bring to future features, what could be improved, what should be avoided, new ideas, lessons learned, gaps in knowledge and so forth, both regarding the approach and the feature tested.
  8. The output of this meeting is an end report. The end report is basically the wiki page with ending notes added. The receivers of the end report are integration testers, testers testing the same/similar features, as well as stakeholders interested in the work done.
  9. Anything in this process is subject to change if the context suggests so.
The example above is stripped of everything company-specific, but that mostly concerned how things were solved in our specific context anyway. Also, the original process included a short description of how to actually work with exploratory testing in a way that provides visual structure, and a bit about flow. It had its flaws, sometimes based on lack of experience and sometimes on context (like how carefully we had to keep track of test ideas, since test data in this form was sometimes requested by customers), but it was a tremendous improvement.

By the way, the spreadsheet was later replaced by an 8 hour PHP hack... imagine being able to improve your situation by replacing a tool costing thousands of dollars with an 8 hour inhouse hack. I felt pretty good about myself after that day.

November to December - People finding out
Up until this point very few seemed to care about our work. Some liked our new approach and the qualitative reports we created were appreciated, but nothing more.

Suddenly someone noticed we had left out basically everything described in the official test process. Our work was questioned, our judgement was questioned and people demanded we fixed our "mess" (document test cases done for instance).


Luckily for us, we had prepared ourselves to explain our decisions, how the new output fitted into the old system and how traceability was still achieved. We heard some grudges, but since the testing in the feature seemed to be a success and the transition towards agile was in focus, it was actually received rather well. Especially my boss at that time, and the two test managers, deserve a lot of credit for their reactions (they also knew to some degree what was happening in the team, as we stopped making most of it a secret when things worked well again).


Today - Trying to support from the outside
The brilliant colleagues I had, including Saam, keep pushing for the ideas we introduced. I try my best to support them, but this blog post reminds me I could improve on that.


Important to note: The company in question has developed a lot since this story happened, so it's not in any way representative of how they work now. It's just a story about change, nothing else!

Lesson: Breaking rules should not be your first option
If you think what I did was irresponsible, I definitely agree. I do think you sometimes have to bend or break rules to make change happen, but if the stakes are high it should really be your last resort, done with great care. Speaking to whoever is in charge and changing things that way is generally a much better idea... what would have happened if Saam had never joined the team, for instance?

If it's completely impossible to change something without going behind everyone's back, a new job is probably a much better option.


Lesson: You can change most things
To continue on my previous lesson: I've never found something that, with the right skill, time and effort, didn't seem possible to change. I know such things exist (e.g. strictly regulated industries), but they're rarely the problem. So trying over and over again, staying naive at a healthy level, is a great skill/trait.

Lesson: Experience is invaluable

If someone tells me:
- This idea with lightweight test processes is just bullshit
My answer would be:
- Let me tell you about when me and Saam...

Lesson: Testing requires skill
I knew the theory about how to organize testing in an exploratory way already in June, but it took me many hours of practice to really become skilled enough to perform efficient exploratory testing. It's like driving: you can't learn just by reading, you actually have to practice (but reading can speed up the process).

Lesson: My unsuccessful experiments were just as valuable as my successful ones
When I started to reflect on my work during my vacation, I used my experiences so far to find problems to solve. When doing this, I developed an understanding of each problem that a successful experiment probably couldn't have provided me. This was useful later when I ran into other problems that I could somehow relate back to problems I had already encountered.

Lesson: Reflection is just as important as planning
Reality constantly crushed my plans, but in the beginning I was so focused on moving forward that I completely neglected this. When I finally started reflecting, it became an invaluable tool to keep me on track. Plans didn't take surprises (changes, my misconceptions, my simplifications or my lack of understanding) into consideration. All those required extensive reflection to figure out as I learned more and more.

Lesson: Adaptation is key
I tried to just pick an interesting concept and force it into my context; this failed badly most of the time. When I started to look at what I was trying to implement to see where and how it fitted, my attempts to introduce things suddenly became efficient. It's like solving a jigsaw puzzle, with the difference that you have an unlimited number of pieces, only a few can fit in a good way, when you actually find a good piece you still need to cut it to make it fit, and as you solve the puzzle the picture you try to make changes... so kinda not like a jigsaw puzzle at all, but maybe that gives you a useful mental image.

Lesson: Learn from others' experiences
One thing I used when trying to turn things around during my vacation was things other testers had told me, stuff like "we did it like this but it failed due to that". Exploratory testing was nothing new at the company; great testers already used it in secrecy, they just couldn't solve the problems they had run into trying to fit it into our process (problems like knowing what they had covered, and traceability). What they told me helped me realize problems I had to solve and raised questions I had to be comfortable answering, as well as providing solutions to problems I already knew about.

Lesson: Discovering my behavioral warning signs was great
Today I know I'm in trouble when I, for instance, stop sharing things with others or when I start to give very many reasons (rather than a few very clear ones) why we should do something. Both are signs I'm either off course or at least don't know what I'm doing well enough. The latter can sometimes be a necessary step, but at least it's good to be aware. When I discover one of these signs I can immediately stop what I'm doing and start reflecting to ensure I'm not drifting off in a dangerous direction.

Lesson: Being the only judge is dangerous
In my team I was the only one with testing experience (until Saam joined). This led to the rest of the team trusting my judgement when it came to decisions concerning test (which is to some extent reasonable). As soon as Saam and I started to work together we began questioning each other's opinions, ideas and directions, helping us to stay on a healthy course.

Lesson: Reasons are not proof
I can provide reasons for pretty much everything (including why you should jump off a cliff) but that doesn't mean everything is right (including the part where you jump off a cliff). I argued for my decisions early on, providing tons of reasons, even though I was way off. Now I know better than to just accept people's reasons; instead I look for lack of real-life experience, generalizations, false connections and other assumptions (including my own).

Lesson: It's important to have someone to discuss with

It's no coincidence everything started to improve when Saam and I started to work together. When I was alone (Saam came to similar conclusions), my creativity dropped, my willingness to stand up for the changes I wanted to make decreased, my motivation dropped significantly, my trust in my results dropped, learning didn't happen as rapidly, and finally I got stuck a lot more often and for longer (even when Saam couldn't help me, it started the process of involving someone who could, because obviously I wasn't stupid for asking whatever question it was). Having someone to quickly turn to whenever I needed a second opinion or some quick guidance was key.

Lesson: Exploratory testing needs support from your process

I didn't understand how crippled my exploratory testing efforts would be by a surrounding test process that wasn't adapted to it (test reporting, test strategy guidelines, traceability requirements etc.). For instance, the test management tool everyone had to use had no support for rapidly changing test scope or qualitative progress reporting. To even give ourselves a fair chance of succeeding we had to work that out first.

Considering what kind of environment we actually created, my advice would be to isolate yourself. So instead of trying to change the world around you (which many will object to, and in the end you will probably only be allowed to change half the stuff anyway, severely crippling your chances of success), try to build a micro universe where as much as possible is adjusted to fit an exploratory approach without colliding with the outside world. In agile, having an independent team with tested code as the only demanded output, working in an isolated part of the product / a team-owned product, could be one way.


Lesson: Education can save time and improve quality of test

In the beginning, the exploratory testing didn't work efficiently. One reason, I believe, was that I didn't really incorporate learning. Instead of stopping my testing effort when I felt my knowledge was insufficient, I continued, just putting out each fire I ran into (learning the bare minimum to continue). This was sometimes the right decision, but often not, as it constantly interrupted my testing flow. As soon as I started to spend longer stretches educating myself in an area (not just to solve one single problem), I also started to get a better flow and I started to recognize valuable tests I had previously not seen.

Lesson: If you can't argue for something you don't understand it

I've found a lovely exercise I wish I had performed earlier. Before I suggest a change, I run it in my head, trying to defend it against all the critique I can imagine. If I find a question I can't answer, I try to sort that out before I continue. Notice that the answers are context dependent, so you can't just "learn" answers; you have to go through this over and over (but after a few times the loop gets faster, as long as the suggestion is not too complex).

Example:

- We should remove the storage of test cases
- But without storage we can't reuse them
- Have we ever done that? For instance, doesn't it take more time to find an existing suitable test case than to write a new one?
- But we lose traceability
- We could store session debriefs or one liners of tests we've performed, it would be a lot cheaper. By the way, when do we need traceability on that scale?
- Well, what if a customer asks for exactly what we've tested... or our test manager?
- Has that ever happened? And is it really worth all this time and storage money just to achieve that?
- But what about new testers, where will they go to learn testing?
- Our test cases are used for this purpose only due to lack of something better. I think the teams can manage this a lot better with pair testing and internal competence boosts. Also, the test scripts don't really explain why you should do certain things, partly because people stop writing when it becomes too complex (easier to figure out as you go), so the competence value is minimal as it is today.
...

Wrap up
This is one of my proudest moments, most frustrating experiences, most motivating explorations and definitely one of my most educational adventures. The lessons listed are just a brief start; we learned so much, and I still learn things from reading this now.

Finally, this blog post was almost entirely written in December last year. I can now see gaps in my understanding of exploratory testing and how colored I was by only having worked at the company described. That's a lesson in itself.

Thanks for reading!

25 September 2013

Transpection Tuesdays

A couple of weeks ago me and Helena Jeret Mäe sent a few cryptic tweets about "Transpection Tuesday". Since then Helena has written a great summary of our first session and to add to that here are my thoughts so far.

What is Transpection
The best explanations I've found are James Bach's explanation of transpection and Michael Bolton's transcript of a transpection session. But basically it's one person asking questions of another person, with the twist that the one asking has already tried to answer those questions. The result is that you get two sets of answers, hopefully less biased by each other.

What is Transpection Tuesday
When I first met Helena, it was during Let's Test earlier this year. She had tried to grab me a few times, but sessions and other conversations had interrupted us. So suddenly, during lunch, she stood up and said: "You sit here!", and pointed at the chair in front of her. And so I did, which was one of the smarter decisions I've made as a tester.

Transpection Tuesday happened in a somewhat similar fashion:

Helena: Hey. I was thinking this morning that we could have Transpection Tuesdays (because it rhymes :P) or something. if there's something each of us is trying to work through and solve, then we could try this transpection thing out. It's a bit random thought
Erik: Sounds cool... I'll take care of the kids alone tonight since my fiancé is working so suits me perfect... with the possible problem that a kid wakes up and I have to leave very suddenly and might not come back .)
Helena: I understand that and won't hold a grudge :P
Erik: In that case I'm in! Usually kids are asleep after 19:30 CET, hopefully there's no change tonight

So after some technical difficulties we started a Skype call with video. By the way, video was awesome for this since it made sharing visualizations easier, body language is useful when trying to explain something, and it helped me, at least, stay focused.

So to summarize:
Transpection Tuesday is simply a Skype session, ~3 hours long, happening every Tuesday evening between me and Helena, dedicated to talking about testing.

Confession
Okay, so what we've done so far has been far from just transpection. Instead it has been more like a weekly dose of conferring where we discuss topics that matter to us in various forms; where form depends on mood and topic. You could thus argue the name is misleading, and it probably is... But we like it.

What did we discuss?
Helena shared our combined notes from the first session. I won't write anything detailed about it now, but if something in those notes seems interesting, feel free to leave a comment and I can try to write a future blog post about it (my guess is some of the more "community challenging" stuff would fit that criterion).

How did we discuss?
I recognized five different "styles"/formats used during our talks, plus one suggested that we haven't tried yet:
  1. Transpection
    I tried this briefly when speaking about why/if we should send people to courses, and Helena did it much more extensively during the last TT when speaking about bias. When talking about these topics, one of us asked questions we already had answers to but felt we wanted to improve/challenge. What makes this powerful is that the person answering last is slightly less biased, since he/she hasn't heard the first person's answers, which gives you interesting answers to compare.

  2. Conferring
    Most of the talking was just quick questions being thrown out and then we together tried to answer and discuss them. I would say this is the style we use most (it's like our "go to" style) and for those of you who've been to Let's Test or CAST, this is very similar to the discussions you typically end up in during lunches (which is a good thing!).

  3. Challenging
    I plan to do a lot more of this going forward because it felt like an awesome way to learn. Essentially it's one of us asking the other to clarify something to a great level of detail, or challenging a claim being made (similar to what happens during RST for instance). Very powerful, as assumptions were uncovered and you constantly had to be on your toes.

  4. Role play
    We take opposite roles (e.g. the manager who questions the professionalism of ET versus a context-driven tester considering ET a good approach in the current context) and simply fight it out, with the goal of pressuring each other to explain/motivate something at a deep level, as well as highlighting assumptions. We have actually done this very briefly, but I don't think enough to really call it role play; more like quick imitations and reactions to those.

  5. Transfer
    Simply one of us telling the other something without any questioning going on; basically a monologue about a topic. What was interesting to me was that the information I received this way really seemed valuable, but missing the interactive aspect hampered my ability to connect it to something else/make "lasting connections". Also, I often felt the energy dropped a bit when one of us talked for long stretches.

  6. Interview
    Similar to Transfer but more driven by questions, so basically the receiver "chooses" what information he/she is interested in. Better than the monologue version but still a lot less valuable than any of the more interactive formats. It was like, even with questions, I didn't get my brain into gear, so new information simply didn't stick the way it did when e.g. conferring.
Improvements
Here are a few things I've noticed:
  • Having a clear topic helps, as we got fewer side tracks and thus could more easily focus on going deep into the current topic. Side tracks are not evil per se, but too many and too big ones seem to make discussions rather shallow (my feeling so far).
  • Having prepared follow-up questions would have been helpful. We have improved this for next week; it will be interesting to see the result.
  • We did get deeper into the topic during our third TT as we pressured each other to better explain what we meant and why, even when agreeing. I think we can do this to a much greater level though.
  • I need to try to quickly structure the information in my head before starting long rants/monologues about something.
  • I think both of us could benefit from monitoring our own speaking, trying to stop ourselves during long monologues. It's much harder for the other person to recognize a good break point.
Wrap up
So, Transpection Tuesdays are not so much about transpection but rather about weekly conferring. I already look forward to next week (preliminary topic: "Arguing for Exploratory Testing") and that's a great sign. I hope this turns into a long-lasting tradition and it will definitely be the foundation for many future blog posts!

03 September 2013

My CAST presentation


Slides
Handout (used during the exercise)
Skill Development List and Personal Manifesto

* The links are currently broken, unfortunately I cannot fix this until I'm back at work since I don't have the files on my home computer anymore. Stay tuned...
And credit to Srinivas for pointing this out, thank you!


Feedback
One cool thing about presenting at a conference like CAST was all the feedback I got. First, Ilari, Bernie Schroeder and Martin Hynie gave me some great things to think about/improve in the way I was presenting (avoid looking back at my slides, tweaks to my body language, tweaks to tempo and pauses). Simon Schrijver, Erik Davis, Phil Kirkham and a whole bunch of other people really boosted my confidence with their encouraging comments afterwards. Peter Walen, Brett Hinton and Jonathan, together with several others, asked brilliant questions and/or added valuable content during open season that helped me understand my topic even better, and finally a handful of people from the audience shared stories related to my talk. All of you who attended: You really helped make my first-ever conference presentation better both for me and, I'm sure, for the other attendees! Thank you!

Another kind of feedback was the recording. Listening to that helped me realize:
  • It took me a while to get the energy going
  • I had some annoying "ehh" and "and" early on
  • My flow was really nice
  • I should not interrupt the facilitator
  • I've always been a bit annoyed with how my voice sounds but listening to the recording has helped me appreciate it instead.
  • I felt several pauses were just a fraction too short (for me)
  • Well-rehearsed parts did not sound "flat" or "boring", rather the opposite, so I'm very much questioning whether I can "rehearse too much", which some people warned me about before.
  • 26:13 (masochist "joke")... WTF! Still hurts to hear myself say... whatever I tried to say .)
  • I felt my energy rise every time I made some kind of imitation (THANK YOU HELENA)
  • The tempo was nice, could have varied it a bit more for greater effect but better than I expected
  • I hardly rambled / repeated myself at all... once again I think rehearsals played a key role.
Forgotten part
Remember how I added balance in my 10 second summary of Yes Man (31:10 in the video)? Last winter I realized I couldn't just answer yes to everything. Sure cool things happened but that required me to spend quite a bit of time away from my family. That, after a while, became too much time away. So family has turned out to be something I need to think about before answering yes to too many cool things. Your balance might be something else like time with friends, time to sleep, travel costs, mental energy etc. No matter what it is, keep it in mind.

But that small addition was all I missed! Hurray!

Answering a few questions
I got a few questions I couldn't answer during the open season that I promised to get back to when I had been thinking a bit more so here we go:

Jonathan: Was there anything on your list you tried but completely failed at?

Just before I started the list, at my former work, I tried to change how we tested in my team to make it more "Context Driven". I failed in so many ways, more or less putting the team in a position where no one knew what was tested, what was not tested, or how well the testing had gone, and there was not a single document or statement that could help anyone figure that out, mostly because I didn't know either.

Later, actually partly thanks to the list, all that turned into one of my coolest adventures as a tester, but up until that transformation occurred it was a major mess up.

My way of dealing with it back then was to try and cover it up, which is not a good approach by the way. Afterwards though, I quickly turned it into a learning experience, trying to figure out why I messed up, what I could learn from it and what positive changes actually came out of it. I think that's my general approach to failures: accept what happened, accept I can't undo anything, try to learn as much as possible and move on.

Also, in my blog post from yesterday I spoke about my history of bad presentations.

Peter Walen: What is it you have done that you kind of wish you hadn't done?

I don't regret anything I've done to myself or others as long as I, or whoever I did something to, feel happy about the current situation. And if someone else feels bad, I try to focus on what I can do to improve their situation rather than blame myself for something I can't change anyway. So far that strategy has worked out.

The story that still bugs me the most, that I can think about now, is when I was caught lying in like 5th grade. It was rather innocent, our teacher asked us about the temperature where we lived (don't ask me why). I lived far out on the countryside and we quite often had a degree or two colder than in the city. So I strategically grabbed a chair so that I would be asked last and always answered 1-3 degrees lower than everybody else. One day a classmate outsmarted me though and answered a ridiculously low temperature. I didn't notice the temperature he suggested was like ten degrees lower than everyone else's so I just answered like two degrees lower than his... and suddenly everyone started to laugh.

But like all the other stories, that one taught me a valuable lesson, so I've come to terms with it... still hurts a tiny bit to admit though...

... From a testing perspective I can't really think of any... maybe that I didn't start to care about testing earlier but once again, my current position rocks so can't really feel bad about that either.

Simon Schrijver: How do you rebound from mistakes?
I (and Simon himself!) answered that during the presentation but please check out the previous answers for some additional details.

Additional credits

Pekka Marjamäki
The kind and brilliant tester who offered me coaching before RST.
He will be speaking at EuroSTAR! Check him out!

Henrik Emilsson
Henrik provided the invaluable feedback that turned my "not so amazing Lightning Talk" into a kick ass learning experience.

Johan Jonasson
The person who helped me find my local test community, helped me survive my first peer conference and was part of founding Let's Test, what's not to like about him .)

Pierre Rasmussen
Mentioned during open season as the friend who shared his weekly reflections on Twitter. I have been too inactive for too long to say if he still does but no matter what he's a brilliant test-infected programmer worth checking out.

02 September 2013

My success story, a story about failures

I have a lot of things to say about CAST, to people who would like to speak at a test conference, about personal development and about my own presentation. But first a few stories I want to share, hopefully busting any ideas about "some people are born to speak while others aren't".

Messing up a play
My first memory of something presenter-like was in 4th grade. 4th, 5th and 6th graders were mixed together in drama groups and something like once a month each group performed a play for the other groups. My group's first play was ruined. How? Well I couldn't shut up. I interrupted the other actors, screwed up jokes by explaining even the most obvious ones and was too nervous to remember any of my lines giving everyone else a hard time... The self elected leader of the group (a 6th grader) was furious when all of us gathered afterwards to discuss the play.

My first presentation
Moving on to 7th grade and the first time I presented to a larger group of people (~30). For this presentation each of us was given a famous author to talk about; mine was Charles Dickens. I was so nervous, but I had rehearsed a lot and prepared a rather detailed script, so everything should be fine. I went up on stage, took a couple of deep breaths and said... Blaghl...bl..br, which unfortunately doesn't make any more sense in Swedish. I couldn't form a single word! I couldn't even say my name! I tried several times, with every failed attempt just inducing more and more panic. Finally I threw away the script and just relied on my memory, which was no problem at all since I had rehearsed sooo much. When the torture was finally over several classmates came up and said: Wow, you were really calm when presenting... I was still shaking.

At the university
I volunteered to become a university ambassador. One part of that job was to speak to groups of high school students. One time the group was rather large, about a hundred people. I froze! I froze for something like 30 seconds (felt like 10 minutes) before I started and when I finally did I think I forgot to tell them like half the stuff I was expected to present.

Improving legacy code
Moving on a few years, now as an employed software tester. I had suggested a very vague idea, basically saying we had to work with the quality of our legacy code. So I was asked to present my ideas to the rest of the department. To make matters worse I got sick a few days before and due to that I had basically no time to prepare myself. I also knew that the material was way too abstract due to the fact I didn't really understand the details. Any professional would say: "I'm very sorry but I'm not ready to give this presentation" (a question I was even asked due to my sickness)... I didn't. The result?
"We need to work with the quality of our legacy code. A few things I've thought of izzzz...", what happened after that is a bit blurry but apparently one of the managers saw I was about to pass out and caught me before I fell, while someone else grabbed a chair. I got some water and actually finished the talk sitting down, still a bit dizzy... it was a memorable talk but not for the reasons I would have liked.

CAST warmup
Let's finish with a much less dramatic story. Just a few months before CAST I presented at a local test meetup in Malmö, Sweden. I would give a short talk on security testing. I felt calm until the moment I sat down in the room. After that anxiety rapidly (and for me surprisingly) built up. I think one of the main reasons was I simply hadn't rehearsed my talk. When Henrik Andersson offered us a beer I quickly grabbed one and it was perfect to ease the anxiety (something I haven't told him). However, knowing I "needed" a beer during one of my last public presentations before CAST was not really... comforting.

CAST 2013
Let's save this one for another post but short story is it went great!

Why am I telling you this?
Long ago I had the misconception that speakers at, for instance, conferences were natural talents who just had something I didn't possess. A misconception with an interesting twist as today I often hear people call me a naturally talented speaker. What I've learned, and hope my stories help you see as well, is that that something is mostly hard work. If you decide you want to learn to present there's no exotic gene stopping you.

Why am I telling you this... 2?
You might look at the stories I shared and say: "Why not focus on your successes?", but here's something cool: The stories above are my most important events as a speaker. Without them I wouldn't be in the position I am today as a speaker and if I sound insane, let me give you some examples:

Messing up a play
  • I need to sometimes stop myself and simply shut up.
  • I need to think about what's interesting to the listener, not just what I want to say.
  • Being well prepared (rehearse!) is key!
My first presentation
  • I can actually present.
  • I don't need a script.
  • Even the worst case scenario wasn't that bad (you could argue that passing out is worse than not being able to say a word but they're pretty close).
  • There are things I can't seem to learn/fully understand without actually failing first.
  • There's no such thing as a pure failure or success; we always fail to some degree, and we miss out on a lot of potential success if we don't take the opportunity to learn from these small or big failures.
  • As long as I rehearse, things seem to work out okay no matter what.
At the university
  • Silence is actually not that bad.
  • Less is more. I forgot a lot of the prepared material, but in the end that seemed to make what I said stick better (based on reactions from students after the presentation).
  • You can turn a bad start or problematic presentation into something great, it's never too late. This one actually turned out as one of my better presentations as an ambassador.
Improving legacy code
  • Doctors didn't find any problems with my heart, lungs or head (obviously my physical health was good enough). That's actually quite comforting.
  • Due to all the medical tests I had to leave a whole bunch of blood samples which eased my fear of needles and hospitals.
  • ... and that taught me a valuable lesson about fear: Fear is much about not knowing the outcome, not about the experience itself. Understand that and a whole lot of things stop being scary (very, very powerful insight).
  • I know my limitations better.
  • I know when to say no better.
  • I know the possible consequences which further helps me say no when I really should.
  • I've learned the value of understanding the content I'm about to present.
  • I need to rehearse.
  • I need to rehearse.
  • I need to rehearse.
  • I've learned code quality is much more complex than I once thought it was.
CAST warmup
  • I need to rehearse
  • I need to rehearse
  • I need to rehearse
  • Open season with K-cards is actually not that scary (I had not tried that as a speaker before)
  • Beer solves a lot of problems... kinda.

You can do it!
If you feel like presenting at a major conference would be cool but you hesitate since you're not a "natural speaker": think again!

Practice
At work, local meetups, small conferences, at home, or maybe check out Toastmasters International (credit to whoever gave the lightning talk about Toastmasters at CAST; I just found out we have a local club where I live so I'll give it a try!).

Challenge yourself
Try new things, get out and speak in front of people, record yourself, test various formats, present without slides, try drama or stand-up, present in front of more people, present in a non-native language...

Wrap up
I think I'm a talented speaker today, and that's not just based on my CAST performance. But it has little to do with my amazing genes, it has to do with practice and challenging my limits. I'm pretty sure all your favorite speakers have similar stories (or at least I hope) and that their biggest secret too is practice.

So don't wait to become a great speaker, act to become a great speaker!

Oh, I forgot!
A key inspiration for this post as well as one of the real highlights for me during CAST, was Dawn Haynes' keynote. Make sure you check it out!

Oh, I forgot... 2!
During open season, several people asked me about my failures and how I dealt with them. Hopefully this post answers some of those questions. If not, please ask your question again (Peter, Simon and Jonathan were the ones I remember, did I miss anyone?).

Thank you for your time and good luck with your presentations!

17 August 2013

CAST, A sneak peek

In my last post I promised to release a sneak peek of my CAST presentation. Even though I'm infamous for being too optimistic and missing deadlines, that promise will actually hold true.

So... what will I talk about?

The story
In May 2012 the first ever Let's Test conference was held.

When I heard about this conference it immediately grabbed my attention. It was just a train ride away, the testers I wanted to meet would be there and the general setup felt spot on for me.

So I prepared myself to just barge into my manager's office and demand he send me to this amazing conference but... that never happened.

Instead I convinced myself I wasn't ready.

Just over a year has passed, and in just a few days I will present at CAST; an opportunity I got thanks to writing an abstract just 8 months after missing Let's Test. Today I am the tester I, before Let's Test, told others I wanted to be but never dared to become... All that is quite a transformation, and how that transformation could happen is what I will walk you through.

Actual content
There are of course a million things contributing to this change, many of which are highly context dependent. But I will highlight four that I think have been key, and try to extract as much practical and useful content out of those as I can. I will actually not list them here, but in my 2012 reflections you can see all of them in action and some of them described. The big difference between that blog series and this presentation is that I won't focus on what I actually did (like reading blogs, meeting testers, changing jobs etc.) but rather on the mechanics that made those things happen. To explain that I need an example, so even though I said I wouldn't, here is one of the four "chapters": My skill development list. In the presentation I will explain the list in great detail, including what made it work, what problems I encountered and how I dealt with them.

I want more!
Okay, okay, that wasn't very revealing, but let me give you something that is: Here is my latest skill development list and personal manifesto. What they have in common is cliché names and that both are very much relevant to this presentation.

Also I can tell you there will be an exercise...

... You will be asked to answer several questions (to yourself)...

... and you will see this guy in many shapes and forms:

Thanks to IBM and Jojo Mendoza for the original pictures that made this guy!

If that's not revealing enough you'll simply have to come and watch my presentation: Making learning my top priority, at CAST.

See you there!

25 July 2013

CAST, The story about an abstract

The Challenge
It was late in January. I was spending the Sunday with my kids and fiancée visiting my mother. As we were eating, my phone made that annoying sound it makes when I get a message on Twitter. I ignored it and kept enjoying my visit. Little did I know that the annoying sound was the start of one of my biggest adventures so far.

Anyway, later that evening I finally checked my phone and found this tweet from my, soon to become, manager:


The first thought that ran through my head?
"Yeah right! Like I'm good enough to do that"
Then it hit me: I had had the same feeling about Let's Test, 8 months earlier, and I still regret losing that opportunity; and 3 months earlier when I was invited to SWET 4, but that time I accepted the invitation and loved every moment. This was for sure a much greater adventure, but why not give it a shot?


Getting started
When Louise Perold said "not much time" she wasn't lying. Three days to finish my first ever abstract for a conference talk; add kids to that and I was down to 3 evenings... and I didn't even have a topic.

As the theme for CAST was "lessons learned" I figured I at least had a shot since my career so far was much more about experimenting than reading stuff. After some thinking I came down to three potential topics:
  • Note taking - Based on my blog post and further experiences
  • Going exploratory - My colleague at Ericsson and I turned a factory-school test process into a context-driven approach in our team. It was an amazing experience with some really interesting results. I will share those some day (you can read a tiny bit here).
  • Mindset - I had gone from knowing nothing about testing, to learning about testing, and finally to becoming a tester with ideas others seemed to put value in. What made that happen?
The first one felt like something others would pull off much better (and after attending Huib's and Jean-Paul's tutorial at Let's Test I'm glad I didn't pick that topic). I could have focused on the experimental part but that thought never occurred to me.

The second one felt a bit... well, with me soon leaving Ericsson to join Maria at Verisure it just didn't feel right.

So I went with number three.

Writing abstracts
I had a topic, now to write an abstract about it... but what should be in an abstract? I scratched my head and started browsing around looking at various abstracts. Finally I just shrugged and started writing.

I sent my first draft to Maria for review. We both felt it didn't say anything about the actual content in the presentation so I started on my next one... two days left.

During day two I managed to produce two different abstracts, both having the same basic problem: they described a topic way too big to fit in a 20-40 minute slot. So with one evening left it was back to the drawing board, to either focus on some key detail in my current topic or find a completely new one.

Delivery time
I ended up focusing on a smaller but recurring theme in my previous abstracts, which by the way was about the mindsets described in my The year I became a passionate tester reflections. I wrote and wrote, finishing late in the night, making it a completely new abstract that Maria never got to review. As I pressed "send" in the submission form I didn't know what to think. On one hand I was really proud of my abstract; to me it was interesting and well written. On the other hand I felt it probably had similar problems to my previous abstracts, I just hadn't realized it yet.

A first hint
Maria told me she had been asked if she thought I would be able to pull off the presentation (I actually have a background as a speaker but in a completely different area). That question was an early indication that my abstract was at least considered. Suddenly things became real, I might be standing in front of a crowd six months later giving a presentation at a major test conference on the other side of this planet.

Announcements
Finally the day came when presenters would be announced. I had mixed feelings; one part of me screamed "Don't you dare pick me" while the other wanted nothing else in the world than to see that email in my inbox. The first people started announcing they would be speaking at CAST. After a while it became apparent no such mail had been sent to me and I just felt... emptiness.

A couple of days passed. Even though I should have been proud of myself for fighting the urge not to write, and later send, the abstract, I still just felt emptiness.

Spam
Two days after the announcements I cleaned up my spam folder. Usually I just press "delete all" without even looking at the content (yes, I once did trust Gmail that much). For some reason I didn't this time. When glancing through the list I noticed a mail close to the top:

Dear Erik,

Thank you so much for your submission for CAST2013. We are delighted to inform you that you have been selected to speak. Details will follow shortly around the next steps.

Best regards,
Louise Perold and Ben Kelly

Hey Google, that's a really ugly trick! Don't you ever do that again or you'll end up like Yahoo did after spam-marking mails I sent to myself.

So what now
I've been working on my presentation for quite a while now. Not too long ago though, I finally asked myself: "What's the goal? What do I want people to feel/know/do when they leave my presentation?" That question forced me to somewhat start over, but it's coming together nicely and will be great in August.

The presentation? Well, it's named Making learning my top priority and you can read more about it in the CAST schedule. The plan is to post one more blog post before CAST and in that provide a sneak peek of what I will talk about, but that's for another day. For now, thank you for reading and I hope to see you in Madison later this summer!

... Oh, and in case you wonder about my promise to my fiancée: Yes, she and our (now) six-month-old girl will come with me.

Take care!

03 July 2013

Observation Exercise / Experiment

Intro
After attending Ilari's tutorial at Let's Test I decided to write a blog post, with the goal to put what he taught us into my own context as a tester. That post is still in a raw draft state but in the meantime, on a similar topic, I want to share a simple observation exercise/experiment I've conducted.

The experiment
For one week I decided to "actively observe" during my 15 minute walks between work and home. I had no idea what "actively observe" would mean, but I figured that would become clear as I tried it (which it did).

First walk
I left my house ready to observe. I started to search broadly just sweeping over my surroundings trying to find interesting stuff. I did this pretty much the whole way, finding nothing. That was a disappointment.

Second walk
On my walk home I decided to stop focusing on finding interesting things; instead I simply tried to focus on things and see what came out of it. Bingo! Suddenly I stopped as two cemetery gardeners were cutting the grass with big scissors to get a perfectly straight border between the walking path and the lawn. This made me look at the flowers, and suddenly I realized how symmetrically they were arranged. This continued as I started to notice interesting grave stones, grave stones I had missed during my first walk when trying to "observe everything".

Rest of the walks
Each day I seemed to lower my walking tempo slightly, and I slowly went back to the sweeping strategy, except I didn't just try to get an overview of everything; instead I swept in the sense of rapidly switching focus from one thing to another.

Observations
  • Initially my goal was to find interesting stuff (sweep) but that quickly changed to general curiosity (focus). In the end I didn't care about what I would find, the search in itself was way more interesting.

  • Each day I seemed to walk a little slower. Since I didn't clock myself (which would probably bias me too badly to render any useful results anyway) I can't be sure, of course, but it felt like I did.

  • In the beginning I seemed scared of missing something important. That basically made me miss everything.

  • When I swept over stuff, the only things I reacted to were very out-of-place things (which rarely were interesting to discover/observe) or motion. To change this I had to focus specifically on certain details, living with the fact that I probably missed a ton of things.

  • After a few days I started to look forward to the actual walks, they became a somewhat spiritual exercise. Which isn't strange as it reminds me of many mindfulness and meditation practices I've come in contact with.

  • I think I've started doing this kind of "active observations" in other places as well somewhat unconsciously (just noticed it the other day). This would be quite a significant change since I'm very easily distracted and absentminded, I don't really focus on my surroundings.

  • Being tired (two mornings) or annoyed (one afternoon) significantly limited my ability to observe. This is nothing surprising but still interesting to realize for myself.

  • To be able to ignore moving objects when observing stationary objects takes practice, which is also no surprise but once again interesting to realize for myself.

  • The more I practiced, the more I figured out what was interesting for me to observe. In the beginning it felt like I just looked at things for the sake of it, but after a while I started to ask questions and build genuine (I feel) curiosity... I also learned it'll take some time to really master this; my 10x15 minutes pretty much only made me realize how much I have left to learn.
Conclusion
Will this make me a better tester? I don't know, and I can't think of any way to measure it; even if I could, bias would probably ruin the measurements. But my guess is that some of these lessons and practices will help, though to what extent I don't know. Examples could be practicing to ignore movement as I focus on stationary details, or the need to focus on specific details rather than sweep. All in all, it will at least not make me a worse tester (I hope).

One interesting question after these kinds of experiments: will I do this next week, in a month, or in a year? I have no idea but I hope so. Just like with training, right now it feels like a sure thing, but give it a few stressed days and I might have lost track; we'll see. No matter what, I've learned some interesting things during this week!

20 June 2013

Experience Report: Pairing with a Programmer

Background
It was one of those typical "do or die" scenarios: A feature was implemented that was critical for the project but it was delivered late due to slightly optimistic estimates. To make matters worse, it touched on rather fundamental parts of the product, making it somewhat complicated to roll back. The job was to quickly test, fix bugs and, in cooperation with our product owner, decide if the feature was ready to be released. The feature in this case was rather technical, with a simple interface that, depending on configuration, triggered tons of background activity including communication between different servers.

What we did
The programmer and I teamed up. The first day we worked in front of his computer only. Initially he pretty much explained the feature while I noted down some thoughts and scenarios that popped up in my head. When done, we started with the most straightforward scenarios and immediately found some serious bugs that required extensive investigation and experimentation to fully understand. We managed to solve the most critical ones and left not that late, feeling really good about ourselves. Day two would be a breeze!

Day two we made an "upgrade": we booked a room and brought one computer each. Immediately we found out we had badly underestimated the work left, as critical bug after critical bug emerged. Since some of them required knowledge we lacked, we called in a domain expert to help us and all three of us continued to work together for the rest of the day (one computer each). Day two turned into a 12-hour-long session of exploration and bug bashing. When done we felt slightly exhausted but proud; it was working!

Some info about the group
I'm a decent programmer and a good tester. The programmer in this case is fairly young but a good programmer, progressively becoming a better tester. Finally, the domain expert is a bit of everything: he knows the system and the customers, and has a fairly good understanding of both programming and testing.

Observations
  • We had fun! Personally I liked day one best, not because the pressure was somewhat lower but because we cooperated more when sharing one computer.

  • Continuing on the topic of computers: I think from an efficiency standpoint one computer each was, short term, mostly more efficient (a possible exception being the most intense and mind-challenging investigations). It's important to note that one computer each didn't mean we just sat together testing individually; what we did was work together on one task while doing different things (like taking notes, executing tests, monitoring logs, correcting code etc.). Working together at one computer, though, was a better learning experience and, as I said before, I think it was more fun, if that counts for anything.

  • I feel very comfortable saying we would not have caught and fixed anywhere near the amount of bugs we did if working separately. A few reasons why:
    • We have different investigation methods. The programmer did low-level investigations really well, adding debug printouts, investigating code etc., while I did high-level investigations really well, checking general patterns, running additional scenarios etc. Not only did this help us avoid getting stuck by changing "method", but my high-level investigations also benefited from his low-level additions and vice versa.
    • We could share some of the work, for instance I took notes as the programmer ran scenarios. This helped us avoid losing momentum.
    • No time was wasted performing handovers from bug reporter to bug fixer back to the reporter.

  • When all three of us worked together I felt we often worked as one pair plus one person alone (not all the time, but most of it). In this case it was probably the right thing to do, as we required all three persons' skills, but I think that was an exception. My general judgement, based only on my own experiences, is that there is little or no gain in being three compared to two if you don't really require everyone's skill set more or less constantly.

  • We learned so much!:
    • All three of us learned new things about programming, the system in general and testing, both from each other and from explaining things ourselves
    • I learned more about logging in general
    • I learned more about the system and feature
    • I learned more about the code/code architecture
    • I learned more about the others' strengths and weaknesses
    • The programmer got help being skeptical about his own code
    • The programmer learned some new scenarios his implementations need to handle

  • We got to know each other's personalities and skills better which fosters trust/respect/team spirit.

  • It made us communicate more which helped us gain from everyone's observations (sometimes an observation was communicated that didn't mean anything to the one making it but did to someone else).

  • Pairing also made us stay focused for longer stretches (you don't want to be the first one to procrastinate).

  • A byproduct was code review. As we investigated code sections, other improvements/problems/potential problems became apparent and were fixed.

  • One thought that popped up was working separately and then pairing as problems occurred. However, that would require time to be spent rerunning the actions and explaining the steps, and we could possibly miss crucial details from the initial run. All in all I think there's a place for that kind of approach as well, but not that often.
Sum up
I did pair up with programmers quite a few times at Ericsson and found it useful but for some reason I haven't done it on my current job until now. I will definitely continue doing it though, for the learning experience, for the laughs and for the (at least perceived) efficiency gain. Luckily it seems I wasn't the only one feeling that way!

11 June 2013

Reflections and feedback request on communication

This is a post in which I will mostly ask for your help rather than me trying to help you. If that's not what you're after, feel free to skip to "Improving my communication" further down.

So what I would like from those of you still reading is any:
  • Exercises
  • Experiences
  • Advice
  • Helpful ideas
  • Free speculation
  • Things to look into
  • Things to think about
  • Other stuff you think might be helpful/useful
... related to what I'm about to describe. I leave it up to you to decide how to give feedback, but some suggestions are: as comments to this post, email (brickuz(a)gmail.com), Twitter (@brickuz) or Skype (brickuz). Thank you so much for taking the time; I value this tremendously!

Disclaimer
I will talk about drifting as something bad. In many cases, following thoughts to explore new ideas is (I think) a great way to learn, but the drifting I refer to is drifting away from a point I'm trying to make, ending up with me not making any point at all.

I will also talk about communication only as verbal communication that is not (supposed to be) one-way.

Slowing down my tempo
I speak very fast, something I've done for as long as I can remember. The more excited I get the quicker I speak but it's an issue long before I get excited.

Problems: Sometimes people find it hard to pick up what I'm saying; it creates a barrage of words which makes people lose interest; it (I think) affects my credibility; and it makes people lose track, making it hard to get what I'm trying to say even when they stay interested and hear the words.

So far I've tried to actively lower my tempo, but I quickly forget about it when discussions get intense or when I'm excited about the topic (being a passionate person, that happens constantly). I've read before that it may be connected to stress, which is probably a factor and I'm trying to work on that as well, but I doubt it's the main reason. After all, I've done this ever since I was a kid.

Lowering my voice
I often talk loud.

Problems: It can make it somewhat uncomfortable to listen to me for long stretches, and it leaves a bad impression, especially when it's used to drown out other speakers (unfortunately not uncommon). The fact that I have a somewhat high-pitched voice makes it worse, I think.

I don't have perfect hearing, but I also don't think that really affects this. I would rather say it's a bad habit I haven't learned to control. It also seems there's a connection between me raising my volume and my tempo, which makes me believe something else is the main problem.

Come to the point
I often have a point I want to make but I tend to drift off, simply because something pops up in my head and immediately turns into words. A special case of this is when I throw out a claim and start reasoning around it (is it really true or not?) as I speak, in a way that I don't think really helps anyone. A third variant is when I associate something said with something in my head, giving me a starting point but no point to move towards. Even so, I start talking and often end up in a situation where I'm desperately searching for a point to make, AKA rambling.

Problems: A very common end result is lots of words and few relevant points, which in turn makes people lose interest.

I try to stop myself and end my rambling as soon as I realize I've forgotten where I'm heading but, once again, I often lose that ability when excited.

Stop interrupting people
I often interrupt the person speaking because I have a point, association or addition that fits in exactly where we are.

Problems: Apart from being rude and giving a bad impression, I make the speaker lose track or miss out on valuable information, and it seems that interrupting others also makes it more okay to interrupt me more often.

My greatest ally in this so far is a well-developed empathy. Thanks to it, "something" stops me and says: "if I speak now I'll likely hurt the speaker's feelings, or at least rob him of the chance to shine"... though that doesn't come remotely close to stopping me often enough.

How did I come to these conclusions?
  • Stopping and reflecting on what just happened after I've gone bananas in conversations
  • I'm blessed with some really smart friends who care enough about me to give me feedback
  • Observations of how people react to/act after I've spoken/rambled
  • Reflecting on things said by people who don't like me
Isn't this "just the person I am"?
I believe we all have various characteristics, like being impatient or humble. These provide us with some ups and some downs. I'm quite satisfied with the characteristics I have so I'm not really trying to "take the me out of my communication" but... it's also a lot about skill. Sure I get impatient or excited and that affects my communication, but on top of that there's a lot of skill I want to improve.

Why does this happen
Apart from impatience and excitement, another thing that seems to "passively" affect my communication is fear. First there's the fear of people not letting me into a conversation. This, based on my own observations/reflections, makes me interrupt more often as well as keep talking without letting other people break in. The fear of not getting back into the conversation also gets worse the longer I go on, as I know people will hesitate, for good reasons, to let me in again when I stop. This is of course very counterproductive, as people will stop listening at some point. Reading what I just wrote actually sparks some thoughts; we'll see where that ends up.

Second is the fear of forgetting something that, important point, seems incredibly important in the moment (not so much when I actually say it, though). I try to battle this by bringing a notebook, but so far it hasn't improved the situation much.

Related to these fears, there seems to be a mismatch between how I evaluate the "wow factor" of a thought inside my head and how it comes out. I don't think this is based on poor construction of the message, but rather that I exaggerate the importance in my head. I think this mismatch is an important factor in my generating a constant flow of ideas to explore, so the problem is probably not the mismatch itself but rather that nothing stops my brain from communicating all those thoughts as soon as they pop up.

Improving my communication
In general, to battle these problems, I try to remind myself that people will only remember a few things I say (in general at least) so I better make sure that what I want them to remember is what's in focus rather than having it mixed up with a million other words and thoughts.

I try to stay aware of my thought process and create a small gap between thinking a thought and speaking it, so that I can quickly make a rough estimate of: "is there a point relevant to this conversation in what I'm about to say?"

I try to think more egotistically: When listening I generally learn more than when speaking. This way of reasoning has also made me use questions more often to still steer conversations in directions I find interesting. One byproduct of doing this is that I tend to communicate my own messages a lot more efficiently when baking them into this "question driven approach". Questions also help me stay focused on the current topic rather than drifting away. I thank Peksi and his exercise, as well as RST with James, for helping me learn how to do this.

Taking time to reflect, like in this blog post, helps me become more aware of what I should think about/practice when speaking. Just getting this far has made me more aware of the silliness in many of my bad habits and ways of handling fear.

Moving on
Apart from putting my trust in some great things coming from those of you who have read this (grateful once again), one thing I've realized during these reflections is I should record my conversations and try to learn from listening to myself more often. Apart from learning more about my patterns I hope it'll make me better aware of how/when/why my speedy rambling kills any substance within.

Also, I need to go into "practice mode" more often, observing myself as I speak and trying to tweak the why/when/how/how much I talk instead of keeping my head busy figuring out more things to say.

... I should do this kind of reflection more often; it really generates interesting thoughts!

Wrap up
Thank you for surviving this rather long blog post. Please share your experiences and ideas regarding:
  • Speaking slower
  • Speaking more quietly
  • Getting to the point
  • Stop interrupting people
I really appreciate any input I can get!

(I keep coming back to the idea that maybe I should release a recording of me rambling... well, let's save that for another time.)