26 May 2016

House of Test

Earlier this month I left House of Test. It was not because I disliked the company but because I was offered an amazing opportunity at another company I obviously love.

Since so many have asked me what it's like to work for House of Test, I hope giving my view, as an ex-employee, will shed some light on what "the House" really is.

Consultancy is still consultancy
The first thing to make clear is that House of Test is still "just" a consulting company. Money doesn't magically materialize, and as an employee you're expected to actively work to find or create new assignments and to help market the company. Also, far from all assignments are glamorous, and you won't actually meet your House of Test colleagues that often.

Growing pains
House of Test is also growing quite a bit right now, which slowly lessens some of the challenges of being a small consulting company (e.g. less pressure on each consultant to help market the company and hunt for assignments). But it also introduces new challenges related to e.g. communication and each employee's influence.

As with any growth there are growing pains as everyone tries to adjust to the new environment; nothing strange, but also nothing that magically goes away just because it's House of Test. One example would be finding a format that better accommodates the growing numbers at the company's gatherings.

"I'm not good enough for House of Test"
The reason people ask how it is to work for House of Test is that House of Test is pretty famous within the (context driven) testing community. That in turn creates an interesting challenge (which is also partly why I write this post):

Great people not thinking they are good enough to work for House of Test.

This might sound silly to some, but it's been a serious recruitment issue for House of Test, and just since I joined two years ago I've heard "I don't think I'm good enough" more than a couple of times from great testers I know.

The truth: House of Test is filled with passionate testers, many of whom have made a name for themselves by speaking at conferences (we'll get back to this), doing well in testing challenges or otherwise being active in the testing community. The common denominator is not godlike testing skills, however; it's passion and a willingness to learn. Many of the testers in House of Test started fairly junior but have grown for reasons I will explain later, and even so, many of the current ones are still fairly early in their development.

... meaning you're good enough, just to make that clear.

Education as a focal point
The one thing I think differentiates House of Test is their view on education and learning.

Internally, conferences, courses and other learning activities are probably the single most discussed topic, and call-for-papers reminders are posted for basically every (major) testing conference on the planet. This is reinforced by management in many ways; one being that I don't think I've ever heard them say e.g. "you would miss too much time from your assignment" or "the timing is bad" when someone requests to go to a conference or course... Of course there's a maximum budget for education, but that budget is also through the roof compared to any other company I've been in contact with.

Another interesting thing is how top-end education is preferred over the cheaper, local alternatives. Many hotties (myself included) have gone to, for instance, PSL in Albuquerque, CAST in various places in the US, RST wherever it's available and Let's Test, simply because they are considered to be the best available. This focus on quality rather than cost is something I, now being an ex-hottie, will definitely bring with me.

On the flip side, there's an expectation that you want all this. It can rise to a level of unhelpful pressure to send in abstracts or attend learning activities (including after work hours in some cases).

One final clarification though: a problem I've experienced at other companies is the pressure to immediately explain and demonstrate what you've learned from a course or conference. That doesn't exist at House of Test (in my experience). The founders seem to understand/trust that it often takes plenty of time and/or specific contexts to fully grasp the value, and that education is a long-term investment in general.

To summarize:
My view is that hotties improve their testing skills unusually quickly, simply because all the best tools to do so are introduced and available to them. For some, though, this may create an unhelpful feeling of pressure to improve.

Communication
Being spread out over "four and a half countries" (the half country is actually not Denmark; it's the globetrotters), combined with being rather small, poses a major communication challenge. This works fairly well though...

House of Test has an active Slack chat, quarterly gatherings and at basically any major or local test conference you'll meet colleagues.

The gatherings themselves are worth mentioning as they are one of the greatest perks I've experienced. Imagine having a small conference packed with people like Maria Kedemo, Ilari Henrik Aegerter or Carsten Feilberg every three months. That's a pretty awesome employee benefit in itself! I don't know how the format or dynamics of these gatherings will be affected by the company's growth, but during my time they were simply amazing.

Is House of Test for you?
If you like working with highly skilled professionals who have strong opinions and a willingness to debate basically anything, you will feel right at home. A willingness to learn is definitely required to function well, and an interest in, at some point, standing on a stage and sharing your experiences helps too, but is not necessary.

If you just want a quiet 9-to-5 job, have trouble dealing with consultancy in general or disagree with the context driven testing principles, you probably have less to gain from joining the House.

... that's the simplified explanation.

Summary
On one hand, House of Test is just an ordinary consulting firm with, for instance, the need to hunt for assignments, the risk of ending up in less-than-optimal workplaces and distance to your closest colleagues.

On the other hand House of Test is like a wet dream for passionate testers; you will work with some of the best testers (and testing teachers) in the world, education will be a focal point and anything awesome happening in testing will be introduced to you the moment it becomes public... or often before.

Thank you House of Test for two awesome years!

11 April 2016

Time estimates in testing, part 1

Why this blog post

Activity planning and time estimation in testing are complex to begin with; combine that with cross-functional teams (having to actively take others' planning into consideration) and I've seen a lot of confusion. In this blog post I hope to arm you with some useful ideas on how to deal with activity planning and time estimates in testing.

... and that turned out to be a much bigger task than I anticipated, so in this first part I will cover the more overall concepts and in the second part I'll provide more concrete explanations of each approach (see the graphic below).

Key attributes

When I started listing various ways I've been working with estimates I continually came back to two variables that seemed central:
  1. To what degree do testers and developers share their planning?
  2. How much trust and effort is put into time estimates?
For this reason I made this little graphic:


Do note that these are still thoughts in progress, so I appreciate any feedback on whether this makes sense or not.

Separate planning vs Shared planning

I've been in teams where testers were only, or almost only, expected to test what had just been implemented (as in testing the exact work described in the implementation stories, thus sharing the planning with the developers), as well as teams where testers were expected to create their own planning based on the testing needs. Both extremes come with their own benefits and drawbacks:

Shared planning:
 + less administration (as in cheaper)
 + closer relationship between testers and developers
 + quick feedback loops in general
 + easier to get focus on testability improvements in the code

Separate planning
 + more focus put on regression, system (e.g. consistency) and integration testing
 + easier to get work only affecting testers prioritized
 + planning can be optimized to fit testing

Two dangerous mind traps I've experienced:

Shared planning
Risk: Testing is seen as something very simplistic where you just test the product's new features with little or no regard for the big picture.

Negative effect: General testing activities, such as overall performance testing, integration testing or regression testing, get deprioritized.

Implications: Bugs related to e.g. integration, unforeseen legacy impacts or product inconsistency.
Coping: Dealing with this takes testers or other team members who are good at advocating for testing in general and can explain why certain test activities make sense even when no obviously affected code has changed.


Separate planning
Risk: Testers and developers move apart and create two separated subteams within the team.

Negative effect: Hurts information sharing and the understanding of each other's work (which in turn may erode the respect for one another's profession), and developers lose their feeling of responsibility for quality.

Implications: Prolonged feedback loops (code developed, to bug reported, to bug fixed, to fix verified), worse quality of the code handed over from development, and testers lacking important information, making them less effective or leading them to bad decisions.
Coping: A well-functioning team and social management, testers who are good at communicating their work and at speaking with developers on the developers' terms (or vice versa), and developers taking a genuine interest in testers and what they do.

I'll elaborate a bit more on this in part 2.

Effort put into estimates

Let's talk about the two extremes first.

When we put a lot of effort into estimates we may, for instance, split the work into small tasks to make them easier to estimate and try to give each one as exact a time estimate as possible; e.g. "these 6 tasks are estimated at 3, 4, 6, 3, 1 and 9 hours, so the time to test this feature should be around 26 hours".

Little effort is put into time estimates when we, for instance, accept an external time constraint, e.g. "get the testing ready for the release" or "finish your testing within this sprint", and instead inform the one setting the time constraint what we roughly think the quality of the testing will be at that point.

Important: "Quality of the testing" in this case refers to how deep/well we expect we will be able to cover the various features. This is not the same as the "quality of the product". Why the latter is not what is (typically) talked about is because that oe is way too complex to estimate and often out of what we can actively influence. For instance it's not our call if a bug we've identified should actually be fixed, that's up to the stakeholder(s).

At this point the former might seem more reasonable but it's not that simple...

Accuracy of estimates

There's no "done" in testing. This is sort of true for development as well, we can always continue to improve the code itself or polish the actual feature, but at least we can observe the product and say "this is good enough" and, important, that state of "done" can be fairly well-defined/described.

Even though some try to pretend otherwise, testing does not work like this at all.

Testing happens inside your head; development does too, to a large degree. However, the product of testing also stays in your head and relies on your ability to communicate it, while the product of development can easily be observed by others. This complicates estimates since we cannot anchor discussions to a fairly well defined end product; instead we have to leave it to everyone's interpretation of the task and do our best to communicate our own interpretation, so that we at least roughly estimate the same thing. For this to work at all, testers first need awareness of what they actually do, then the communication skills necessary to explain it to others, and the receivers must have enough understanding to interpret the message correctly. No human on earth (as far as I know) does even one of these things flawlessly (aka: communication is hard).

With no well defined "done", everyone has to rely on their interpretations of what they think needs to be done and what they think the receiver asks for. That in turn will impact the estimate's accuracy, but this is just part of the problem...

... the other part

On top of estimating a task we cannot clearly define we also have to estimate something that is inherently plagued with great levels of uncertainty:

The time it'll take to test something depends on external factors such as the quality of the code, the complexity of the chosen design (which is often not set when estimates are made), the stability (uptime and robustness) of the necessary environments, the help we'll get to correct problems found or blocking us, etc. To continue the reasoning: the quality of the code depends on things such as how hard something is to implement, the skill level of the developer(s) and how stressed they are (a developer on a tight schedule can typically stress the design a bit to get it out the door in time, which in turn affects the quality, which in turn affects the effort required to reach the abstract "done state" when testing), and the stress level depends on even more factors. I could go on, but let's just say the time it takes to test something to the "abstract, hard to reasonably accurately define, level we desire" depends on so many external factors that the estimate's uncertainty is huge.
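
To make that uncertainty concrete, here's a minimal simulation sketch using the six task estimates from the earlier example. The half-to-double range per task is an invented assumption for illustration; real uncertainty is often worse and skewed towards overruns:

    import random

    # The six task estimates (in hours) from the example above.
    estimates = [3, 4, 6, 3, 1, 9]

    # Assume each task can take anywhere from half to double its estimate.
    def simulate_total():
        return sum(e * random.uniform(0.5, 2.0) for e in estimates)

    totals = sorted(simulate_total() for _ in range(10_000))
    low, high = totals[1_000], totals[9_000]  # roughly the 10th and 90th percentiles
    print(f"Point estimate: 26h. 80% of simulated totals fall in {low:.0f}-{high:.0f}h")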

Bullshit!

I once told a project manager this and he replied: "Bullshit! My testers ace their estimates almost every time!" (in a product with pretty crappy quality and expensive testing, it should be added). And here comes a conflicting truth:

We can't predict when we'll be "done" but we can be "done" by Wednesday if you want.

In development we need to produce some kind of agreed-upon product based on, at least, our explicit requirements. Remember how testing did not have a defined "done"; taken to its extreme, we can (almost) say we're done at any point in time, since it's unclear what done is anyhow: "These are the findings we've made so far (and are able to communicate). You told us to be 'done' by today, so we are 'done', but our professional judgement is that this needs more testing as the following areas are still considered unstable or only superficially tested...". In a sense we always do this, estimates or not, but in the latter case we don't even spend time trying to guess; instead we do our best to put ourselves in a good enough state when the time is up.

... and before someone talks about bad attitude or lack of professionalism: this is of course not how it's done in practice. Some level of estimation is always performed, but rather than saying "We think this will take 10 hours, this 5 hours, this 2 hours and this 8 hours" we might say, for instance, "We think we're on track for the release, but the quality at this point is unusually low so we're a bit worried this might change".

Planning based on time or effort

This topic is too big to cover in this post but a quick rundown.

When planning our testing we can focus on either effort or time. Effort means "we want to reach a state where we/someone else feels confident about this particular part before we leave it", while time means "we want to spend the fixed amount of time we have in a way that lets us look at everything at least briefly". In the former we're doomed to not be "done" if something runs late, leaving project management with little choice but to delay e.g. a release if they don't want to take on considerable risk (leaving parts completely untested). But this also lets us argue better for the need of more testing, by saying "we haven't even looked at these four features", and we'll spend less time revisiting areas and "relearning" them, since each area is left closer to a "done state".

In the latter we will hopefully have found at least the most obvious, serious problems in all areas of the product and can thus make a general assessment: "our testing is fairly shallow in most areas and we would like to spend more time with the product, but from what we've seen all of the new features seem to be okay at a basic level". The drawbacks are that it's harder to argue that we actually need more time to test if stakeholders aren't well informed and comfortable with the concept of "uncertainty", as well as a greater risk of needing to revisit areas for further testing.

How does this relate to estimates? Well, in my experience the effort approach prevails when we have estimates and plan our work around the actual stories, since "we shouldn't" have too many open stories at once and when they are closed, they are closed, so we have to "complete them". In the same way the time approach is more common when we skip estimates (my experience), at least in combination with separate test planning. If we have a set deadline (e.g. sprint end or release date) we can more naturally plan in a way that lets us look at everything at least once.

I say this because most stakeholders I've been in contact with seem to prefer the time approach but still argue for the importance of estimates, and these concepts seem a bit contradictory, at least in the context of testing. One last note: the contradiction also applies to development. If we get a bit of everything ready we will at least have some kind of product in the end, but as described earlier, since more has to be in place for a piece of software to make sense, and since estimates should be easier to get accurate enough, the contradiction is not as big of a deal there (my biased opinion, feel free to challenge it).

Wrap up of estimate or not

To wrap up:

Since time estimates of testing will always be plagued with great levels of uncertainty, no matter the estimation effort, the question is whether detailed estimates really provide enough additional value to support their cost (both the time and the risk of misleading stakeholders into believing we know when we'll be "done"). The "ideal" level of estimation is also highly context dependent, the context changes over time and we can't really objectively measure whether our approach is good or bad; so we'll have to experiment and rely on our ability to observe and analyze.

... and finally, don't confuse time estimates with planning. Planning happens in both cases, and the effort spent on planning has little or no correlation to the effort spent estimating (my experience).

Stay tuned for part 2...

25 August 2015

How to practice software testing

During open season after Erica Walker's presentation at CAST, I mentioned a few useful tools for practicing software testing or software testing related skills (rather than passively watching/reading/listening). With this blog post I want to expand on that a bit and share some of the applications/sources I've found useful when actually practicing to become a better software tester.

General

Bitnami provides simple installers (local install) for several well-known web applications such as WordPress, Moodle and eXo Platform. The installer automatically sets up a web server, a database server and the application itself. This gives you a great sandboxed environment to play with, and you have access to the application's code and database content, allowing you to do pretty nifty stuff.

Since the applications available on Bitnami are fairly large systems, you'll find opportunities to focus your testing on basically any quality characteristic or test technique you choose. Why not try a full equivalence partitioning, value selection and variable combination testing table for the "post thread as moderator" form in phpBB, or a usability analysis of PrestaShop?
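
If a full table sounds abstract, here's a minimal sketch of the underlying idea in Python. The form fields and equivalence classes below are invented for illustration, not taken from phpBB:

    from itertools import product

    # One representative value chosen from each (invented) equivalence class.
    titles = ["a normal title", "", "x" * 10_000]          # valid, empty, extreme length
    bodies = ["plain text", "<script>alert(1)</script>"]   # plain, markup/injection
    sticky = [True, False]

    # Variable combination testing: the full cross product is 12 cases here;
    # pairwise selection would cut this down for forms with more fields.
    for case in product(titles, bodies, sticky):
        print(case)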

The drawback with these big applications may be that they're a bit intimidating and take time to learn. In that case, try one of the many software download sites like Softpedia, but be aware that some software you find this way might come with various malware.

Joining open source projects can be a good way of practicing testing while also giving something back. Popular open source code management sites like GitHub and SourceForge are great places to look for applications in need of testers.

Database

Install XAMPP (fully preconfigured web server) and start running queries against the MySQL server. This also gives you the ability to practice writing (and running) simple scripts in e.g. PHP to manipulate/display the database content. Getting familiar with phpMyAdmin (preinstalled) is also a good idea for any web application tester.
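
As a first scripted step, something like the sketch below works. I'm using Python's mysql-connector here rather than PHP, and it assumes XAMPP's default setup of a root account with an empty password:

    import mysql.connector  # pip install mysql-connector-python

    # XAMPP's default MySQL credentials: root with an empty password.
    conn = mysql.connector.connect(host="localhost", user="root", password="")
    cursor = conn.cursor()

    # List the available databases, then start poking around with your own queries.
    cursor.execute("SHOW DATABASES")
    for (name,) in cursor:
        print(name)

    cursor.close()
    conn.close()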

If you want to practice online I recommend Head First Labs. You might need a MySQL reference (available online) to solve all their exercises though, since they reference pages in an O'Reilly book.

REST API

A great place to take your first few steps in API testing is predic8. They have an online REST API that you're free to play around with. I recommend fetching Postman and just starting with simple GET requests. Use predic8's tutorial to help you progress.
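
Once the manual GET requests feel comfortable, the same calls are easy to script. Here's a minimal sketch using Python's requests library; the endpoint path is my assumption, so check predic8's tutorial for the current paths:

    import requests  # pip install requests

    # Assumed product listing endpoint on predic8's practice API.
    response = requests.get("http://api.predic8.de/shop/products/")

    print(response.status_code)                  # expect 200
    print(response.headers.get("Content-Type"))  # expect something JSON-ish
    print(response.json())                       # inspect the returned structure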

Security

Tons of applications exist for the sole purpose of practicing security testing. These applications have dozens of vulnerabilities built in so that you can practice triggering and exploiting them without risking breaking anything. Also, many of these applications have active communities built around them where you can get help or documentation explaining the various vulnerabilities.
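
To give a taste of what this practice can look like, here's a minimal sketch probing a login form for SQL injection with Python's requests library. The URL, field names and success marker are all invented, and something like this should only ever be pointed at an application that is deliberately vulnerable and yours to test:

    import requests  # pip install requests

    # Hypothetical login endpoint on a locally installed practice app.
    URL = "http://localhost:8080/vulnapp/login"

    # A classic SQL injection probe: if the app naively concatenates input
    # into its SQL query, this may log us in without a valid password.
    payload = {"username": "admin' OR '1'='1", "password": "x"}

    response = requests.post(URL, data=payload)
    print(response.status_code)
    print("logged in" if "Welcome" in response.text else "rejected")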

WebGoat (web testing, local installation)
I've only briefly used this, but from what I understand it might be the best choice available. If you search for WebGoat on YouTube you'll find dozens of tutorial, demonstration and installation videos.

Google Gruyere (web testing, online)
I've played around in Google Gruyere quite a bit. It's a good place to start and convenient since no installation is required. Also, due to its fame, several videos exist demonstrating vulnerabilities in Google Gruyere and explaining the thinking behind discovering them. One example is Alan Richardson's video.

bWAPP (web testing, local installation)
I've only briefly used bWAPP, but it seemed to have potential. bWAPP is more helpful than Google Gruyere in the sense that you're told what vulnerability each page has.

BodgeIt Store (web testing, local installation)
A web security practice application aimed at beginners (if I interpreted the description correctly). I haven't tried this one myself.

Mutillidae (web testing, local installation)
One more I haven't tried myself. What I liked in the description though is that it claims to give hints, which likely makes it a good starting challenge for a new penetration tester.

GameOver (web testing, VirtualBox image)
I discovered GameOver as I was writing this blog post. I haven't tried it yet, but it's a VirtualBox image with several penetration testing tools and web security practice applications (such as WebGoat) preinstalled. Convenient!

There are also pages dedicated to teaching web security by giving the visitor progressively harder challenges (puzzles) to solve. My personal favorite is HackThisSite, as I think the challenges progress at a good pace and you can always get help if you're stuck. For a quite extensive list of practice pages, take a look at the top answer to this Stack Overflow question.

If you want to practice system level penetration testing I recommend the Metasploit Unleashed free course. Also look into Kali Linux, a Linux distribution centered around penetration testing.

Information gathering

Information gathering is a critical skill for testers. A course I've just started, and that seems to have great potential, is Google's Power Searching course. The course comes with challenges, making it interactive enough to fit this blog post.

Improve your skills in other common tools

You can improve your skills in many common tools by using training videos released by vendors or users, either mimicking what's done in the videos or performing the challenges given. One example of a training video collection I've found useful is Microsoft's Office course.

Operating systems

I learned a lot about operating systems in general when I started playing around with Linux. It's a fun way to start, and the amount of help you can get when stuck is mind-boggling. If you have some previous experience, give Arch Linux a chance. If you're new, something like Sabayon might be at the right level. Popular desktop releases such as Ubuntu may be a bit hard to get "under the hood" in, but for a first-timer just seeing a different operating system might be enough. In that case, go with openSUSE or any of the Ubuntu derivatives (e.g. Ubuntu itself, Linux Mint or Elementary OS).

If you don't want to tinker with partitioning; use VirtualBox.

Networks and servers

Plenty of material is available online, and practicing is generally just about tinkering with your own home network, e.g. figuring out what various router configuration options do. You don't need an expensive lab to practice network administration and server setup; two devices (e.g. a computer and a smartphone) and a basic network connecting them (e.g. a router with a built-in wireless switch) are enough. If you feel like you don't know where to start, use for instance the course page linked to in this chapter and try the concepts described (like port forwarding, DHCP and file servers). I personally find network monitoring to be a particularly useful topic for testers.
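
As one concrete example of such tinkering, the sketch below probes which common ports a device on your network answers on. The address is an assumption (a typical router address), and as always: only scan equipment you own:

    import socket

    # Assumed address of a device on your own home network (e.g. the router).
    HOST = "192.168.1.1"

    # Probe a few well-known ports to see what the device exposes.
    for port in (22, 53, 80, 443, 8080):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            is_open = s.connect_ex((HOST, port)) == 0
            print(f"port {port}: {'open' if is_open else 'closed/filtered'}")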

Conclusion

The two most important messages I want you to remember:
  1. Do practice, it's important
  2. ... and not that hard
Good luck and if you have additional suggestions, just share them in the comment section below!

07 August 2015

CAST 2015 - A quick summary

Every conference creates some kind of special memory for me: Let's Test 2013 was meeting Helena Jeret-Mäe for the first time (the most important event so far for me as a tester), but also Richard Robinson and his Miagi-Do challenge. CAST 2013 was my first talk, which was special, but also Dawn Haynes' magical keynote. Let's Test 2014 was the barefoot introduction and NTD 2015 was the Pekka Marjamäki show. Also, as an important bonus, at both Let's Test 2014 and NTD this year I had Helena there; meeting her in person is always very special to me.

So what about CAST 2015? Well there were three people that stood out to me (in no particular order):

1) Ioana Serban caught my attention at Let's Test 2014, and at CAST she spoke for the first time... and she nailed it! The talk had high-value content and the most awesome slide deck I've ever seen, and she packaged all of this as an entertaining and compelling story! I feel fortunate to be one of the people in the room who got to experience it live! On top of that she's a smart, wonderful person I just enjoy being around!

2) Diana Wendruff has this lovely, sparkling personality that makes me happy just by being around her. Her humble curiosity, charming humor and clever insights, on top of a very empathic core, are nothing but awesome. I so look forward to meeting her again! Oh, and she has the coolest business cards; ask for one when you meet her!

3) David Leach (referred to as Kiwi-David in my tweets) was one of those who just made the whole conference better for everyone. He was a first-timer but gave a great lightning talk (the guy can present!) and, even more importantly, he's incredibly skilled at asking good questions during open season, which created massive amounts of extra value for both speakers and attendees (he was probably the most active participant during open season throughout the whole conference). On top of that he's smart and curious. Thank you Dee Ann for helping bring him there, and thank you David for making a great conference even greater!

Oh, one more... I want to highlight the Speak Easy initiative. I happened to go to three of the Speak Easy presenters' talks, and I sincerely think those were the three best talks I attended, which is absolutely crazy considering how little experience these speakers have. I've already talked about Ioana; the other two were Kate Falanga (talking about understanding the brand you create for yourself) and Jessica Ingrassellino (talking about the art of asking questions). I also heard great reviews about Carol Brands' talk (which I missed). I was amazed!

I could keep name dropping forever (Perze, Taylor, Pete, Liz, Mark, Jessica, Roxanne, Dawn of course...) but instead I'll stop and just say: THANK YOU everyone who made this conference amazing! And an extra thank you to the people who not only attended my talk, but made it better by adding to it during open season and after. Oh, and to all the organizers (including facilitators and the staff in the reception), I’m so impressed by the effort you put in, thank you!

To finish off with something more useful; here are my top three takeaways from CAST 2015:

1) You can turn a "boring" (but important/valuable) topic into an entertaining story! Probably my number one takeaway and something I'll definitely use, thank you Ioana!

2) Nicholas Bolton (please correct my spelling Niclas, Nicolas...) shared this wonderful analogy with me: When trying to decide what solution to go for you sometimes have to look at it as being in a maze; if you run around like a headless chicken you'll (likely) not find your way out in time and die. On the other hand; if you stand still and only debate where to go, you’ll starve to death having accomplished nothing. At some point you have to stop arguing about which solution to go with and actually try one.

3) Helena introduced me to an interesting problem prior to CAST: she and I are both getting to a level where we can sort of look down at everything in an organization and actually have the authority to deal with problems we see at various levels. However, we can easily identify problems that would occupy 5000% of our time for the rest of our lives as well as impact people in extremely complex ways. So how do we choose where to put our effort?
...
Roxanne (congratulations on becoming a board member!) commented that maybe that problem partly comes from the fact that we're new to this position/perspective; we probably felt the same way once upon a time with the "lower level stuff" (but experience has taught us how to navigate that). My take on her comment: maybe we should worry less about "figuring out what is right", because we don't have the necessary experience and understanding yet; instead we should head out and explore with learning as the main objective. Aiming to solve "the right thing" currently just makes us stall and feel inadequate (which ties back to takeaway two) and isn't helpful.

So, once again, THANK YOU! I hope to see all of you soon again! CAST 2015 was awesome!

29 June 2015

Digital K-Cards

The page is taken down since I felt I didn't need a web server anymore.

If you want the code, drop a comment or contact me via Twitter / LinkedIn.


A few days ago I was still in the Alps with my colleagues at House of Test, mixing recreation with test discussions. During one of these discussions we had a lot of parallel tracks going on, and the immediate question when people raised their hand was "is this on the current thread or is it a new one?", which took a lot of focus away from the topic. Some of us started improvising K-Cards: I used a background-color app which cycled through a dozen different colors, Carsten had something similar and Ilari used two different glasses of beer.

After this discussion, Lars Sjödahl and I joked about creating a K-Card app. A few hours of programming later, the K-Card "app" was born (click anywhere on the card to change color):
K-Card example: http://brickarp.se/kcard/index.htm?text=5&image=http://brickarp.se/kcard/hot.png
K-Card information: http://brickarp.se/kcard/info.htm
K-Card setup form: http://brickarp.se/kcard/setup.htm
K-Card generator: http://brickarp.se/kcard/get_card.php

Main features:
  • Turns your phone (or similar) into a fullscreen K-card.
  • Card number, logo on the card and whether or not to include the rat hole card can be customised.
  • Can be used offline (no server components for the actual card).
  • A card generator created for e.g. conferences helps provide unique card numbers for all attendees.
If you have suggestions, questions or find bugs, leave a comment or contact me in any other way you prefer.

Finally: Thank you Andreas Cederholm, for spending a few moments to test the app!
Notice though: It's probably still quite buggy due to several changes after Andreas tested it.

Enjoy!

24 March 2015

Testing Education: Curriculum

I promised a long time ago that I would write about each of the subjects in the testing education in detail. I've learned that's not as straightforward as I thought: I need to be careful about what I say/imply about the students, course content, exercises etc. (law, policies, respect for the school (content ownership) etc.).

Because of that I lost the energy to do the detailed posts. But since I receive a lot of questions about the curriculum, here's a post describing it on a more general level. The subjects below are arranged in the same order as they are taught to the students, but the items in each of the bullet lists are not sorted in any particular way. The lists are also not complete in any way; they're basically what I find noteworthy among all the big and small things brought up. Finally, from test design onward it's not as detailed, since those subjects haven't been finished/started yet.

Introduction to testing (4 weeks)

  • What is testing and why do we test software?
  • Note taking and Visualization (mainly mind maps)
  • Heuristics
  • Basic test design
  • Risk
  • Oracles
  • Coverage
  • Bugs and bug reports
  • When are you done?
  • Tools (e.g. Sikuli, Selenium, VirtualBox, Apache and JMeter)
We started this course by, first thing on the first day, giving the students a flash game to test, a basic testing mission and a bare minimum of introduction to get started, just to let them get a minimal practical idea of what testing is. The rest of this course was a combination of theory and practical testing; sometimes we started by letting them test and then explained some concept they had used (e.g. oracles), and sometimes we did it the other way around, e.g. when explaining risk we first gave a lecture and then let them create a simple risk analysis of an application before they actually got to test it.

The testing in this course was (of course) very superficial, and the goal was to introduce many different concepts to give them a "foundation" rather than focus on teaching them one part really well. All in all the students, in one way or another, worked with ~10 different applications including desktop (Windows/OS X depending on their laptop), web, mobile and, in some cases, Linux desktop applications using VirtualBox.

You have to remember that the students in many cases came fresh from high school and/or did not have any technical background so it was just a brief introduction to the concepts and tools mentioned.

Think like a tester (6 weeks)

  • Critical thinking
  • Lateral thinking
  • Heuristics
  • Bias and Fallacies
  • Problem solving
  • Test polarities
  • Models and mental models
  • Information gathering/learning
The general setup was similar to the one used in the introduction course; however, during the "think like a tester" course we added a lot of general exercises (e.g. what can you do with a brick, the alien tour exercise and many others) to complement the theory and practical testing.

During this course James Bach visited the class in Malmö, and my class in Örebro joined in via video. A great opportunity for the students to see, in real life, one of the experts we often reference. The highlight was James testing the same application the students had tested as part of their examination for the introduction course. Malmö had several visits from prominent testers (thanks to Öredev), but I'll leave it to Martin and Maria to speak about those.

Project contexts (3 weeks)

  • Lean and Agile
  • Scrum and Kanban
  • Waterfall and V/W-model
  • Outsourcing
  • Testing in agile
  • Common challenges for testers
The most interesting part of this course was probably a pretty detailed lecture and discussion on agile testing, followed by a big exercise where students were asked to identify risks in various contexts (like isolation etc.) and figure out what to do to mitigate or solve them.

Programming (6 weeks)

  • Java
  • Automation
  • File systems
  • Programming theory (e.g. compilers, bin/hex and memory allocation)
  • TDD/unit tests
Most of this course was used by the students to program their own testing tool (a tool to generate and interpret test data). I'm working on publishing some of them. This has by far been the most challenging course, making students work day and night to get their examination applications ready.

Test methods (4 weeks)

  • Schools of testing
  • ISTQB
  • TMap
  • Context driven testing
  • Myths
We teach a context driven mindset, and the primary objective of this course was for the students to learn about other ways of looking at testing. Most focus was spent on ISTQB since it's so common; the students got to read the syllabus in detail as well as discuss its strengths, weaknesses, goals etc. in true "critical evaluation style".

Test design (24 weeks, 50%)

  • Test approaches
  • Test techniques
  • Technology
  • Heuristics
  • Risk based testing
  • Coverage
  • Oracles
  • When to use what?
  • Security testing
  • Performance and reliability testing
  • Usability and UX testing
This course runs in parallel with the test strategy, bug reporting and test reporting courses described below and is (by far) the biggest course in the education. The goal is to make it rather practical, letting the students use their newly acquired knowledge as quickly and as much as possible; the courses will thus intertwine, as it would be pretty wasteful/fabricated not to do test design, test strategy and test reporting when practicing bug reporting, etc.

Test strategy (12 weeks, 50%)

  • Risk based test management
  • Heuristic test strategy model
  • Visualisation
  • Testability
  • Test framing
  • Test processes
This is the other course starting this week. The most important goal is to make students comfortable when requested to create or present their strategy.

Bug reporting (6 weeks, 50%)

  • Content
  • Different receivers/readers
  • Bug pinpointing
  • Reasons not to correct errors
  • Oracles
  • Bug trackers
  • Style and structure
  • What's a bug and what is not?

Test reporting (6 weeks, 50%)

  • Rhetoric
  • What are we reporting and why?
  • Metrics
  • How not to misguide the reader
  • Style and structure
  • Different receivers/readers/listeners

Testing tools (6 weeks)

  • Various categories of tools
  • Introduction to common tools
The main reason for this course is that students in earlier vocational university software testing educations (not taught by us) felt practical testing and a basic knowledge of tools were the biggest parts lacking in their education. Apart from tools, this being the last course, we will spare some time to talk about practical tips and tricks, preparing the students for the internship and for working full time as testers.

Internship (8 weeks)

To finish off the education, students are sent on internships at various companies. If you're interested in taking on a student, feel free to contact me in whatever way you prefer (email, Twitter, LinkedIn, a comment, buying me a beer at some conference etc.). By the way: I've not forgotten about you, Karlo; the students have been informed.

Learning more

You can read about the education (Swedish) on EC Education's homepage. If you have any further questions or are interested in attending the next iteration of the education (Malmö or Örebro) in September, don't hesitate to ask.

22 November 2014

Report from a software testing education

(written in the middle of the night so you'll have to excuse any strange language or grammar)

Background

I'm currently teaching a 1½-year software testing education. The education is situated in Örebro, Sweden, and there's a similar one in Malmö, where Martin Nilsson is the teacher. In this blog post I just want to briefly describe the education, since many have shown an interest in what Martin and I are doing. This post will not go into any depth about exercises or lectures, so in case that's your only interest you'll have to wait. The plan is to cover individual subjects in separate blog posts.

The Education

The education has a clear context driven touch to it, which is no surprise considering the teachers and that the curriculum was created by Henrik Andersson. Since this is the first year it's run, the material is created in parallel with the education. This has turned out to be a lot of work for Martin and me, but it's incredibly rewarding.

Teaching style

This is a vocational university education, meaning the students are expected to acquire a high level of practical proficiency or, as I prefer to describe it: become awesome testers. Our approach to achieving this has been to mix lectures, discussions, exercises and a lot of practical testing. The exercises include both ones closely related to testing (like the critical thinking exercise I've written about before) and ones designed to teach some aspect in a more general way. I'll tell you more about the exercises in later posts.

A lot of emphasis has also been put on creating an atmosphere where students feel safe to stand up, ask questions, question material and even, at times, get cross-examined. Considering it's a class of 30 students, that's quite a challenge, but we're getting there, and I must say the students are doing their part of the job splendidly.

The Students

While on the topic: the students range from fresh high schoolers to people taking breaks from well paid jobs. So far the students have completely blown our minds! A third of the class attended a local test meetup last week in their spare time, their school results are amazing, during the first course Martin and I had to up the tempo roughly by a factor of 4 (compared to our plan) to keep up with the students, and there's no sign of anyone slowing down.

So far, just in Örebro, they've sent in test reports to two companies and reported bugs to two more, and I'm planning a project with Securitas Direct where they will briefly be part of a live project (the focus is on learning project contexts and improving their ability to test, but showing off for a potential future employer is never a bad bonus). And in Malmö all students attended two days of Öredev (thanks to Maria Kedemo), but I'll leave it to Maria and Martin to talk more about that.

The Subjects

So far two courses are finished: Introduction to testing and Thinking like a tester. I will cover the content, outcome and lessons learned from those in separate posts. A third is underway: Project contexts and finally there's a Programming course before the year ends. Next semester there's Test methods, Test design and techniques, Test strategy, Bug reporting and the final semester consists of Test reporting, Tools and finally there's eight weeks of internship to round up the education.

The Content

How do you set up 20 hours of teacher-led education in 10-20 hours (sometimes in topics you have very limited experience in)? Well, first off, you don't; but together, Martin and I are slowly closing in on 40 hours a week. A few things I've learned:
  1. I created content ridiculously quickly a few times and that quickly raised my general tempo (a bit like how you practice speed reading). Boring, but practice makes perfect.
  2. Since I don't have time to practice what I'm about to say, I try to think about the key points I want to make and never put more than one on each slide, to help me stay focused.
  3. Use the community! Ask friends for help, get some coaching, copy or get inspired by other testers' exercises etc.
  4. Google image is your friend when you dislike text in slides!
  5. When designing my own exercises, I typically start with the one thing I want the students to learn (there are almost always added bonuses but I don't plan for those). From there I try to get to some context where this is very common, and finally I try to figure out a way to simulate this with distractions removed, or find a way it can be isolated/highlighted while testing. I'll go into this in greater detail in later posts.
  6. I force myself to focus on what's important. I often get caught up in, for example, trying to find that one perfect image or real life example but I've learned if I can't find it in 1 minute I will unlikely find it in 30 so I need to (if not critical) reconsider what I'm looking for.
  7. I always strive for simple! It's tempting to show off by teaching the students unnecessary complicated things or making exercises overly delicate. They will forget a lot of what I say so time is, in my experience, better spent trying to teach them what's important, in an as clear and focused way as possible and repeating the key things over and over is typically good as long as you change approach (lectures, exercises, practical testing etc.).
  8. I often find that exercises can, after being run once, be slightly tweaked to either cement what I've been trying to show/teach the students, let them use their new knowledge and show they've learned from their mistakes, or teach another aspect. And tweaking an existing exercise is typically much less work than finding/figuring out a new one!
Sources are anything from articles and books to blogs and experience (from testing, workshops etc.). Finding good, reliable sources is also an area where I've gradually improved. Stories seem to be easier to relate to, so I try to use my own experiences as much as possible. To make sharing between me and Martin easier we typically create simple, elegant slides (one or a few images, avoiding text) with a lot of notes. This works well for us when presenting the other person's material, as well as for students when they have to read up (I'm speaking about the notes right now). I should say that the fact I have a deaf student in my class is also a factor here. She asked me early on to write notes for the slides since she had trouble watching the slides and the interpreter and taking notes at the same time. So she deserves a lot of credit for the detailed notes as well.

There's no limit to what I could say about creating content but let's stop there.

Company contact

Being an education requested by companies in the area, it's pretty natural to have close cooperation with them. So far I'm still building my testing network in Örebro (I live, and typically work, 12 kilometers away, in Linköping), but I'm getting there. The long-term goal is to get students connected with various companies long before the internship starts. We'll see how that goal turns out.

Internship

If you keep reading about the education and think it sounds interesting, keep the internship in mind, as many students are open to, or even actively seeking, work somewhere other than Örebro. Mostly in Sweden (Gothenburg, Stockholm, Västerås, Karlskoga/Karlstad and Helsingborg) but also abroad, such as Australia, Austria, the USA (especially San Francisco and Columbus) and Russia. Just contact me if you want to know more or feel like you could use one of these students at your company. The internship doesn't start until October next year, but it never hurts to make the connection well in advance.

Wrap up

So that's the boring part. Next up is "Introduction to testing": what, how and why we taught the students the things we did and what we (Martin and I) learned from giving this course.

By the way: the education uses #YHTest on Twitter, in case you want to see what the students have created (used sparsely so far).

13 November 2014

Exercise: Critical thinking in software testing

It's a hectic period right now (which explains the lack of presence on Twitter, email, the blog and basically everything else) and that will probably continue for the foreseeable future. But in the middle of this I would like to share an exercise I gave my students. Oh, I'm the teacher for a software testing education these days, in case you didn't know:
  1. Pick a page (for this exercise, preferably a simple one).
  2. Just looking at it: Form as many questions as you can think of.
    (to save yourself some time, just write down what you find potentially relevant)
  3. Try to answer the questions you formed; don't forget to use the rule of three*
* If you can’t think of three things that might go wrong with your plans, then there’s something wrong with your thinking. // Gerald Weinberg

So what's the goal? To practice critical thinking, to spark curiosity, to find a way forward when stuck / find different perspectives, to reveal important questions and to generate ideas for improvements. It may sound a bit silly, but give it a shot!

Example of the "forming questions" part (Square - Login form):
  • What does the logo communicate? Is it representative of the company? What else could be representative? What other locations for the logo would make sense?
  • Why is it so minimalistic? Is there a specific reason? What options exist?
  • What if the logo was bigger? What if everything was bigger?
  • Could the sign in box's bright color be a problem on a bad monitor?
  • What options are there to the text "sign in" in the sign in box header?
  • Are the textbox placeholders HTML5? If so, how will the page work in a browser not supporting HTML5? If not HTML5 placeholders, how will it handle autofill by the browser? Risk of label overlap?
  • What if the "header bar" (saying "sign in") was darker? What colors could be suitable?
  • Is email a good choice for "identifier" (compared to e.g. username)?
  • Password is obscured, why not do the same for email?
  • The logo is not clickable, should it be?
  • How do you get back to the start page?
  • What options are there to the text "Forgot password?"?
  • The text on the sign in button is a bit blurry, would a different font help? Size? Text weight? Color?
  • What's the reason for the choice of languages?
  • What's the difference between English and Canadian English?
  • What's the purpose for the bright logo just beneath the language link?
  • Why can't you tab your way to the language link? Could that be a problem?
  • Any options to the language link (e.g. a flag)?
  • What other fonts could be considered?
  • Is it clear what page you're on (Square)?
  • Is an account free to register? Should this be stated somehow?
  • How will the page signal an incorrect email/password?
  • There's no copyright information, could this be a problem?
  • There's no footer at all, any typical footer information missing?
  • ...
If it's not clear: this is described as an exercise, but the principle of forming a ton of questions to get you moving when stuck, to get a new perspective etc. is perfectly valid whenever you're testing (as many before me have stated; I'm not in any way taking credit for that one :)

Good luck!


11 October 2014

To testers in Sweden: Get my job!

This is only for software testers interested in working in Linköping.

Over the summer I moved from Securitas Direct to House of Test. Securitas Direct is now recruiting to replace me, and I want to tell you why you should apply:

1) For the team: The development team is an incredibly pleasant group with a good mix of personalities and an incredible passion for what they do. It sounds a bit cliché, but when I say passion, I really mean it. People eagerly build various projects in their spare time and talk tech, and with everyone's burning interest it's hard not to get enthusiastic yourself.

2) For your development: You're free to try new things, read up and continue educating yourself in other ways. I'd also like to highlight that Securitas Direct is one of the most generous companies I've encountered when it comes to courses and conferences!

3) For the culture: At Securitas Direct every single individual is listened to, and I mean really listened to. There are tools in place for employees in all areas to raise their ideas, creative employees are rewarded, there's a tangible positive attitude throughout the entire company, and it's a very flat organization where the top manager is visible and listens, no matter who you happen to be. If you want to experience agile the way agile was meant to be, you'll thrive.

4) For the manager: Markus is one of the best managers I've come across! He's a confident manager who trusts his people, listens and cares, is curious, wholeheartedly wants everyone to develop, and is an admirably skilled and inspiring visionary. You will without a doubt be in good hands!

5) For the products: Rapidly changing products with variation and "edge". They form a coherent whole, and what you work on is quickly in the hands of customers.

6) For the freedom: You're essentially completely self-organized, which is incredibly rewarding, free and exciting! The tune, as I experienced it, was: freedom breeds responsibility (rather than freedom under responsibility, which to me is more controlling).

7) For the company: An innovative company with a flat organization where everyone is involved. They also stand up firmly for their moral values, something you feel, and become part of, as an employee.

The obvious question now: if it's so great, why did I leave? The answer: because I wanted to work with coaching and educating testers, something very few product companies have room for (including Securitas Direct). But I'm also the only one who has left the development department so far; that says something in itself!

The development department has only one tester so far. It's growing, so that may well change fairly soon, but importantly:
Being the only tester puts higher demands on your independence. However, the organization (especially the developers) cares about testing, so you're not alone in fighting for it; Linköping has a strong testing community, which helps; and no one has expected me to know everything, learning is encouraged instead. I, for instance, came in with very little experience of web testing and had never really worked alone, but I've not only survived, I've enjoyed the journey. For me, the advantages of being on my own have been the freedom, the vertical personal learning curve and the diversity in what I've tested and gotten to do.

One last argument:

When Markus (your future manager, far right in the picture) heard there was a developer conference in Linköping, the message was: "Anyone who wants to go, just say so and I'll book you in", and he went himself. Focus on learning and personal development, as I said.

Don't miss the chance, take my job! It's a dream job at a dream workplace if you want to develop as a tester!

Apply here:
https://adecco.zerolime.se/show.aspx?src=internal&tag=feed&id=962BD45E-2658-4C71-96BA-435197D32909

/Erik Brickarp, now at House of Test

27 August 2014

Why we need realism in testing

I was sitting on the train. As the train attendant passed, a woman right behind me called out: "Excuse me, is that ticket reader from 2013?"

- I'm not sure, why do you ask? responded the train attendant.
- Well, this morning I noticed the train attendant's ticket reader seemed very bulky, so I called the railway company and they rudely told me it wasn't a problem anymore since they had upgraded the model in 2013. So I was wondering if that's the new or the old model.
- Well, I don't think there's a newer model, but honestly I don't know. What I do know is that this one, at least, wasn't built for me.

They started talking about various problems the train attendant and her colleagues had experienced. These seemed to be the biggest ones:
  • The device was made for much bigger hands than most train attendants'.
  • The device was way too heavy.
  • The device was tough to get a grip around and maneuver, probably even if you had "right sized hands".
The most alarming result was that the number of people on sick leave had grown notably since the current ticket reader was introduced, according to the train attendant, and the main reason was reportedly overstretched arms.

The biggest issue with the current ticket reader seemed to be the humongous battery on the back. At this point I had curiously joined the conversation and commented that the battery's size seemed unreasonable, knowing that even several-year-old, cheap, small smartphones had batteries lasting for days while doing much more complex tasks. The response to that was quite telling:

- The one we had before was much easier to carry but the battery didn't last long enough.

At this point I suspected two things:
  1. Someone was informed: "We need to improve the battery on our ticket readers". Armed with that information, this someone wrote a spec saying "same functionality but better battery", and the humongous battery was the quick fix: a seemingly identical device but with more power. Implicit requirements such as "it should weigh a maximum of ... grams" or "it should fit the hands of our train attendants" were most likely not considered.
     
  2. Whoever tested this didn't test it under realistic conditions, such as for many hours, reaching over other passengers, walking long distances carrying it and so forth. Another possibility is that the casing and hardware were "just an ordered standard third-party device" (probably not built for this specific purpose) and all focus was on the software. A third option would be that no testing was done, since it was "just a change of battery, which will not change anything about the device, only improve it".
I'm still very curious whether any acceptance testing was done by the railway company. If so, under which conditions, for how long and by whom? Actual train attendants? If not, why not? (I can list many plausible answers but I'm still curious.)

Continuing the conversation, the woman who initiated it started to list a few simple ideas, just off the top of her head, that could have eased or solved the problem. My personal favorite was: "Why not design it so that the battery pack is in the belt?" That seemed like a minimum-effort, low-cost solution to both the old and the new problem. Knowing the current state though, I suspect the manufacturer would probably have delivered a too-short power cord.

I thanked this wonderful woman who asked the question, as well as the train attendant. I got a valuable reminder of how important "dogfooding" and testing a product under realistic conditions are, as well as a good reminder of how much you can learn just from being curious.

My message with this story: yes, it's often inconvenient to figure out and set up realistic conditions when testing, but failing to do so can be catastrophic. As the train attendant pointed out: "Some days when my arm is aching I just look at the tickets and assume they are valid".

Update


Here's an image of the ticket reader. Notice the battery (bottom left) and how it obstructs a "natural grip" around the device. This train attendant also had notably larger hands than the one mentioned in the original post (two different train attendants), which further emphasizes the problem.

Finally, when I snapped this picture the train attendant immediately responded: "Are you gonna get us a better ticket reader?". Pretty telling comment.