28 September 2016

Next step

After a colleague pointed out that I'm not 29 years old anymore, I had to revisit my About page. While reading the rest of the text, this sentence made me stop...

Next step is to spread the knowledge I've obtained and be a thinking practitioner who can hopefully do what James once did for me.

I wrote that sentence over 4 years ago, before any tester outside Linköping knew I existed. So I took a moment just to reflect.

"Spread the knowledge I've obtained"
I've been teaching students in Software Testing at a vocational university for 1.5 years; I've given numerous lectures, presentations and workshops on various topics at conferences (national and international), meetups and at my workplaces; and I'm now test coach at Verisure, a role in which I'm basically paid to share the knowledge I've acquired so far. Finally, I've been a fairly active blogger for most of the period.

"Be a thinking practitioner"
Transpection Tuesday, my blog, my peer conference appearances, my many dinner discussions with top notch testers (most recently Göran Bakken) and my (former) activity on Twitter all add to the "thinking" part, while my testing and test-related experiments at Ericsson, Verisure and Zenterio, as well as my effort during the Software Testing World Cup, all add to the "practitioner" part.

Most important though: The two are not separate processes. The best example of this is probably Transpection Tuesday. During Transpection Tuesday, Helena and I often discuss a challenge one or both of us have; together we work out potential solutions or experiments to run; we go back to work to try these solutions or run the experiments; and finally we share and ask for help to evaluate the results at a second Transpection Tuesday. Like I said, one process.

"Who can hopefully do what James once did for me"
After graduation I got emails from two former students; both made me feel I've accomplished exactly this...

... hmm, is it raining inside or why are my eyes moist all of a sudden...

On top of that, other former students, testers and other colleagues (most recently a developer) have all helped me understand that my efforts to inspire, guide and mentor have actually made a difference.

It's not without pride I say: In four years I've wildly exceeded my own expectations based on my memories of what I hoped to achieve in 2012. Not just my expectations for the next four years but potentially for my whole career. Shit... I need a break.

... pause music...

What's my "next step"

Took quite a bit of thinking but here's my updated "next step", or mission if you prefer that:

I've presented to, spoken with, inspired, coached and mentored testers in testing and quality. I want to continue this but to a broader audience, in ways different from what has been done before and inspire others to present, speak, inspire, coach and mentor as well.

Broader audience refers to e.g. developers, students (not studying testing), managers etc.

If you happen to read this in a few years, do remind me to report my status, please.

What's your "next step"

Enough about me, what's your next step?

24 August 2016

How to come up with test ideas


This post is an attempt to gather the various tools, tips and sources I've found useful to improve my ability to generate test ideas.

Notice that the various concepts described are huge and I will only briefly describe them in this post; it's up to you to learn more and put them into practice. A good start would be to check out the many articles I've linked to.

To developers
Important! The techniques and sources I will describe are just as useful when developing. For example:

"This will create a separate log file, what do I need to take into consideration when that file is created..."

If you want to develop high quality software, learn the fundamentals of test design!

Common product risks

Risk catalogs
Risk catalogs are lists of generic risks (things to test) in common components/functions.

You will of course need to judge what's relevant in your context, and risk catalogs aren't flexible the way actual thinking techniques are, but they are still incredibly useful:

Quality characteristics
Quality characteristics, also known as quality criteria or quality attributes, are an attempt to break down quality into meaningful/helpful categories. When generating test ideas these categories help us see the product from different angles.

Generic risks
Some aspects are (almost) always worth taking into consideration when assessing risk, e.g. "is this new functionality?" or "does this module have a history of being buggy?". I can't say this information helps me generate new and creative test ideas, but it helps me focus my effort on risky areas and is thus still relevant.

Test techniques

Test techniques are sort of "compressed tester mindset". The set of 9 techniques defined in the heuristic test strategy model is my personal favorite but there are many different sets out there.

Notice that each technique is way too broad to cover in this blog post but if you want to "learn to think like a tester" I urge you to learn more about and practice them!

Function Testing
Identify and test each function individually.

I prefer to begin with creating a list or mind map of all functions I can identify in whatever I'm about to test. A function in this case is anything the application can do e.g. save a log message, identify that a new version is available or show an error message. To do this, touring is a useful practice described in the links below.

When you've identified the functions, focus on each one individually, example:
"How can delete entry be tested? Successful delete, try to delete an already deleted entry, try to delete when lacking access, stack deletes of the same entry, lose connection to data source when deleting, modified delete request (e.g. SQL injection), potential timing issues, delete different entries in rapid succession, ..."

Domain Testing
Identify the input and output data in the application then look at which of the possible values/combinations of values you should test.

After identifying the variable (input or output) to test, you first group the possible values into equivalence classes. The idea is that the values within each equivalence class should, in theory, behave the same. The next step is to select one or more values from each class to test; to do this you for instance look at boundary values, high risk values or realistic values. Finally, when several variables need to be combined (such as in a typical form), there are various combinatorial techniques to help you find an efficient way (as few tests as possible) to actually combine these values into individual tests with the level of coverage you're aiming for.

Notice! When reading about domain testing you might get intimidated by all the paperwork involved: "I don't have time to do all that for a simple form...". For me the detailed analysis on paper is sometimes useful, especially to practice my domain testing skills, but generally I just keep the various techniques related to partitioning (creating and grouping values into equivalence classes) and value selection in my head and apply them "on the fly". Still: I do urge you to sit down and create e.g. a detailed boundary/equivalence table at least a few times because it's key to actually understanding domain testing. A small sketch of the thinking follows below.
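As an illustration, here is a minimal Python sketch of partitioning and value selection, assuming a hypothetical order-quantity field specified to accept whole numbers from 1 to 100. The validation function is a stand-in, not any real application's code.

```python
# Hypothetical input: an "order quantity" field specified to accept 1-100.
equivalence_classes = {
    "below minimum (invalid)": [0],            # boundary just outside the valid range
    "valid range": [1, 50, 100],               # both boundaries plus a typical value
    "above maximum (invalid)": [101],          # boundary just outside the valid range
    "not a whole number (invalid)": ["ten", "", None, 2.5],
}


def is_accepted(quantity):
    """Stand-in for the real validation logic of the system under test."""
    return isinstance(quantity, int) and 1 <= quantity <= 100


for class_name, values in equivalence_classes.items():
    for value in values:
        print(f"{class_name:>30}: {value!r} -> accepted: {is_accepted(value)}")
```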

For the combinatorial analysis: Remember there are excellent tools!
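To show why tools matter, here is a small stdlib-only sketch using made-up test dimensions. Exhaustively combining three variables with three values each already means 27 tests; a pairwise tool (the third-party allpairspy package is one assumed example) typically cuts that down to roughly nine while still covering every pair of values at least once.

```python
from itertools import product

# Hypothetical test dimensions for a web form.
browsers = ["Chrome", "Firefox", "Safari"]
systems = ["Windows", "macOS", "Linux"]
roles = ["guest", "member", "admin"]

# Exhaustive combination testing: 3 * 3 * 3 = 27 tests.
all_combinations = list(product(browsers, systems, roles))
print(len(all_combinations), "tests needed to cover every full combination")

# A pairwise tool (e.g. the third-party allpairspy package, if installed)
# reduces this to roughly 9 tests while still covering every pair of values:
#     from allpairspy import AllPairs
#     for case in AllPairs([browsers, systems, roles]):
#         print(case)
```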

Stress Testing
Identify various bottlenecks, input that can be sent in huge volumes or that require more computing power than usual, dependencies, communication to external systems etc. and ask yourself how these could become a problem for the application (e.g. flooding of requests, dependencies lacking, slow connections/timeouts/lost packages). Also look into the system's ability to restore itself after collapsing (e.g. will the system come back up after a database crash).
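A tiny, hedged sketch of the "flooding" idea using only the Python standard library. The URL and numbers are assumptions for a local test instance; the point is just how little code it takes to fire off a burst of concurrent requests and count the failures.

```python
import concurrent.futures
import urllib.request

URL = "http://localhost:8080/api/orders"   # assumption: a local test instance
REQUESTS = 500                             # how many requests to send in total
CONCURRENCY = 50                           # how many to keep in flight at once


def hit(url):
    """Send one request and return the HTTP status, or the error we got instead."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status
    except Exception as exc:
        return f"error: {exc}"


with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, [URL] * REQUESTS))

failures = [r for r in results if r != 200]
print(f"{len(failures)} of {REQUESTS} requests did not return 200")
```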

Do notice that stress testing often refers to testing the behavior when actually going beyond a system's breaking point. This is important since if you search for "stress testing" you'll mostly find just a subset of what the test technique "stress testing" is supposed to cover. Try also searching for e.g. "robustness testing", "load testing" and "performance testing".

Flow Testing
Testers often reset environments between tests, especially if using test cases, by e.g. closing the application or only focusing on one step in a long flow. The problem: Users don't. The test technique flow testing is thus about focusing on what will happen when a continuous chain of events occurs. One way is to actually identify a long, realistic flow (e.g. a "buy, cancel, refund" cycle in a web shop) and test this; another is to not reset between tests (e.g. not closing the application or resetting data for several days while testing). A sketch of the first approach follows below.
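A minimal pytest-style sketch of the "buy, cancel, refund" flow. The MiniShop class is a hypothetical in-memory stand-in; what matters is that state is carried from step to step instead of being reset between tests.

```python
class MiniShop:
    """Hypothetical in-memory stand-in for a web shop."""

    def __init__(self, balance=100):
        self.balance = balance
        self.orders = {}
        self._next_id = 1

    def place_order(self, price):
        order_id = self._next_id
        self._next_id += 1
        self.balance -= price
        self.orders[order_id] = {"price": price, "status": "confirmed"}
        return order_id

    def cancel(self, order_id):
        self.orders[order_id]["status"] = "cancelled"

    def refund(self, order_id):
        order = self.orders[order_id]
        if order["status"] == "cancelled":
            self.balance += order["price"]
            order["status"] = "refunded"


def test_buy_cancel_refund_flow():
    shop = MiniShop()
    starting_balance = shop.balance

    order_id = shop.place_order(price=30)   # step 1: buy
    shop.cancel(order_id)                   # step 2: cancel
    shop.refund(order_id)                   # step 3: refund

    # The whole chain is verified at the end, without resetting in between.
    assert shop.orders[order_id]["status"] == "refunded"
    assert shop.balance == starting_balance
```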

Scenario Testing
Scenario testing is, unsurprisingly, about creating a scenario and testing that scenario. One important benefit of scenario testing is that it can identify gaps in the implementation that other techniques generally miss, as they focus on what has actually been implemented.

Claims Testing
Also known as "specification-based testing". The idea is to identify the claims made about the product and check if they are actually true. Claims can come from a variety of sources e.g. specifications, requirements, manuals, various in-app help sources, public statements or the EULA.

User Testing
User testing is about asking yourself "how can I use users to improve my testing":
  • A user attempts to complete various tasks in the application and walks you (the tester) through how she/he is reasoning. The tester's job is to identify problems the user runs into e.g. not understanding certain parts of the navigation.
  • Use user data to drive the testing e.g. "the most items a user has ever ordered is 104 and in our tests we never order more than 20, we should increase that amount".
  • Personas and other tools helping you "simulate" a user can be used when testing, example:
    "Looking at this from the perspective of 'Nancy, the power user', I realize we lack keyboard shortcuts in much of our application"
Do notice that user testing often refers only to the first bullet above. This is important since if you search for "user testing" you'll mostly find just a subset of what the test technique "user testing" is supposed to cover. Try also searching for e.g. "personas", "ux testing" and "usability testing".

Risk Testing
Identify risks and design tests to cover each one of them.

These risks can either be identified:

... inside-out (ask: what could go wrong here?), example:
"We now allow users to delete their private messages, how could that be a problem? Well what if a user could potentially delete another user's messages? How can we test if that's possible to do?"

... or outside-in (apply common risks to the product), example:
By using Michael Hunter's "You're not done yet" I realize saving the configuration to a read-only folder could be a problem.

The chapter "Common product risks" includes tons of sources useful when applying the outside-in approach to identify risks.

Notice that "risk-based testing" (prioritize testing that's considered the greatest risk first) is not the same as the test technique "risk testing", but will be your most common result in a web search. However, articles about risk-based testing typically covers tools and techniques to identify and prioritize risks making them useful anyhow; just be aware of the difference.

Automatic Checking
How can you use tools you know about to improve your testing?

Do not confuse this with "automation", "automated checking", "automated testing" or whichever term you prefer: that is a way to actually execute the ideas identified, not a test technique in itself, even though it's unfortunately sometimes used as some sort of technique.

Other useful resources about test techniques
  • What is a good test case, by Cem Kaner
    More information about some of the test techniques above as well as some additional techniques.
  • Lessons learned in software testing (book), by Kaner, Bach, and Pettichord
    The chapter about test techniques is available online as a PDF as part of the BBST course. I very much recommend getting the rest of the book as well.
  • BBST Test design, by Cem Kaner and Rebecca L. Fiedler
    BBST presents a broad set of test techniques. Highly recommended reading in general and for test design in particular.

Critical and lateral thinking

Everything else in this article is more or less an application of critical thinking, supported by lateral thinking. These are thus the most abstract but most generally applicable concepts I will describe.

Critical thinking
"The objective analysis and evaluation of an issue in order to form a judgement"
Oxford Dictionaries

Compare this to e.g. James Bach's definition of software testing:
"Questioning a product in order to evaluate it"
... any similarities?

A more in depth definition of critical thinking has been made by The Critical Thinking Community.

When developing software we make massive amounts of assumptions, simply to save time. For example:
"This framework will prevent...", "this is what users want", "this is the simplest way to..."
Critical thinking is essentially the skill needed to identify these assumptions.

I also designed a critical thinking exercise for testers, which might help.

I would love some more (and better) sources on critical thinking, so please drop a comment with your suggestions!

Lateral thinking
Lateral thinking is, simply put, the skill of thinking outside the box.

"The solving of problems by an indirect and creative approach, typically through viewing the problem in a new and unusual light. Contrasted with vertical thinking."
Oxford Dictionaries

Reading material:

Domain knowledge

To be able to form any test you first need to, at least on a conceptual level, understand the product and its context.

I for instance like to think I'm pretty skilled with test techniques, that my critical and lateral thinking skills are decent and that I know how to get some good value out of tools such as risk catalogs. Still, when new to a product I'm not very effective until I've actually learned the product.

Stuff you can do to quicken the learning of a new application and its context:
  • Ask for a demonstration
  • Look through available documentation (start with documentation intended for customers)
  • Make a visual model of the application, enhance the model as you learn more
  • Tour the application
  • Ask for help to make a SFDIPOT analysis of the application
  • Learn about and try out the tech the application relies on
  • Speak with stakeholders to learn about e.g. the application's purpose and customers
  • Start somewhere and slowly work your way outwards, to avoid getting overwhelmed
  • Observe users, using the application
  • Pair up with developers, testers and others working with the application
  • Ask questions... lots of questions
  • Look into, and apply, various learning techniques (search for "learning techniques")
  • Learn about relevant laws, regulations, standards, certifications and business practices
  • Speak with domain experts, sales people and customer service
  • Look at test documentation available for the application

Other tools

Test framing
Test framing is about being aware of what you're testing and why, or, as Michael Bolton describes it in the link below: "link each testing activity with the testing mission". I find this useful to practice as the awareness, in my experience, improves my ability to see potential risks, helps me focus on relevant test design and actively challenges my critical thinking skills.

When you have a few ideas to test it's sometimes useful to look at those and ask:
What's the complete opposite of these ideas and could that potentially be a concern too?

What happens when several entries fit a delete condition?
Opposite: What happens when no entries fit a delete condition?

What if the network has limited speed, delaying requests?
Opposite: What happens if the network floods the application with requests?

How will a user with limited computer skills approach this?
Opposite: How will a tech-savvy user approach this?

Great additional sources
This is a collection of great test idea articles I've used for this article and/or when generating test ideas that just didn't fit into any of the other chapters:

Please help me improve this

Initially this was intended as "just another blog post" but it sort of took on a life of its own and became a pretty big reference page. I always appreciate comments but since I expect people to assume this is "the truth" more than usual, I will appreciate corrections and suggestions even more!

17 August 2016

PSL alumni Slack group

Problem Solving Leadership is a quite different course in that the time in Albuquerque is just a start rather than the learning experience itself. For this reason a Slack group has been set up to facilitate learning between PSL graduates from different classes rather than just one class at a time, which is normally the case.

How to join the Slack group
To be able to register you first need an invite (this is how Slack works). Use the following form and I'll send you an invite:

If you feel like it's taking too long; feel free to contact me via Skype (@brickuz).

Important! I cannot verify that every single person joining has actually attended PSL, but beware! Joining without having participated will:
  1. Likely hurt your own experience when/if you attend.
  2. Not provide you with very much since you don't have any knowledge about the simulations referred to.
If you're a graduate and like the initiative I would very much appreciate it if you spread the word about this group to other participants in your class or other PSL graduates you know.

If you want a very brief background to who else was involved in starting this group, check out this Twitter thread.

Finally: This group is in no way limited to software testers, I just happen to be a software tester and my most efficient way to reach out is via my blog.

Take care and remember to watch out for the sheriff...

07 June 2016

Time estimates in testing, part 2


Before I begin: Thanks to feedback from especially Jari Laakso, I've found several flaws in my first post, which was what I hoped for when I published it, as the flaws help me understand estimation better. I will highlight some of them in this post but I also want to point out that part 1 has been updated several times since I first published it to clarify or correct content.

How to interpret the graphic below

Do notice that these are not polarities where you switch from one to the other; instead they are continua. I will however describe one rather specific scenario for each box for the sake of simplicity, a chance for you to see the differences between "each corner". My hope is these examples can work as inspiration/help for you to think differently.

One final remark: I will frame all these examples around cross-functional scrum teams consisting of only developers and testers. Of course this is just one of many ways a team can be set up but I'll use it once again for the sake of simplicity.

1 - Estimate testing stories

Separated planning with detailed estimates.

In this scenario testers create their own stories, estimate them and get them prioritized as part of the regular scrum flow (the cross-functional team will now have a mix of stories relevant to developers/testers/others). This allows testers to create their own plan optimized for testing needs, and I find it easier to motivate developers to help out with testing when needed, as the only content left in the sprint might be test work (and this is very much visible), but...

Since planning testing even a week ahead is rather hard in most scenarios (open bugs blocking stories, open bugs delaying stories, late code deliveries, the level of quality differing from what was expected, etc.) I find this approach somewhat chaotic (even though you might expect the complete opposite).

A second challenge with this approach is the risk of creating a gap, within the team, between testers and developers. This, I think, is because if you actually try to follow the plan (test stories) you will lose some of the flexibility to drop something in favor of something else in order to minimize the feedback loop between feature implemented and feature tested.

Finally my experience is it takes a certain skill as a tester and/or a certain group of developers to actually make developers engage in the test part of the sprint planning, further increasing the risk of a gap between testers and developers.

For me, session based test management is an approach to exploratory testing worth looking into if you want a separate test planning, with estimates, without making it too expensive.

If you choose this approach, I would strongly recommend you to share sprint plannings, backlog groomings etc. so that everyone sits down together. It's easy to "optimize" so that testers have their own groomings or that developers don't engage in those, which is like asking for a disconnect to happen.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The project manager/product owner/manager/test lead who pretends/thinks testing can be controlled.

2 - Estimate dev stories

Shared planning with detailed estimates.

In this scenario testers and developers share stories (DoD includes both implementation and testing). A story is estimated in regard to both testing effort and development effort for a combined estimate. A story is considered open until "someone" decides that the implementation, testing and bug fixing is done. That "someone" is typically the concerned testers and developers together but if they disagree or the estimate turns out to be way off, a product owner or similar might have to step in.

My experience is that this fosters a very close bond between testers and developers, which typically has a positive effect on e.g. tester status, information flow and code related testability. Another benefit is that developers have better motivation (and help) to test themselves, as their work is not over until testing is "finished".

However, since the testers are normally the minority it's easy for them to become the ones always losing votes and debates, which may impact how they test, when they stop testing etc., and often not in a good way. Also, since the testing is so closely tied to the development, things like integration testing, general quality assessments (e.g. security, performance, consistency, test legacy for support of changed context etc.) and regression testing may be missed (regression can be helped by developers taking a more active role in creating automated smoke tests, but that's just a part of it).

One last problem is that if you actually create new tester-developer hybrids and the pressure builds up (important release, budget cuts etc.), teams might trick themselves by moving more members into developing and tampering with their normal quality standards (which in a few cases might be the right choice but often not, in my experience, often because the "temporary need" becomes permanent).

If you choose this approach, I would strongly recommend introducing activities such as system tests or release tests. In these activities, also involve developers as part testers, part bug fixers, so that a gap between developing and testing isn't created (testers not being able to provide feedback quickly enough in the new sprint). A different method is to have teams solely dealing with integration testing (more waterfall like/support team) but for this to work there have to be strong information channels established between the cross-functional teams and the integration test teams.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The R&D manager/developer who doesn't really understand there's a need to test anything other than the code that was just written, or the developer who rushes the testing part since "that's not that interesting".

3 - Follow development planning

Shared planning without detailed estimates.

Very similar to the above but without time estimates of the testing. The difference is that you save a bit of money (less administration) but testers are a bit more detached from the development process (though not necessarily from the developers), and if estimates can actually be somewhat accurate you lose them as a planning benefit/potential delay indicator; in this scenario sprint plans assume testers will manage the time limits if developers manage theirs, which is of course not always true. But do remember that estimates are often not that accurate to begin with (described in part 1) so the actual effect of no test estimates might be small; personally I would most often worry more about the potential disconnect between testers and developers.

If you choose to approach testing like this, I would strongly recommend closely monitoring that testers participate actively in all the typical planning activities (groomings and sprint plans). This is to help build a team where testers and developers keep working closely together. The goal is to "just" remove the part where testers perform time estimates, nothing else, in the planning meetings.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
Testers who simply dislike administration and don't even care to try estimating. A reason this might be very troublesome is that these testers could easily contribute to the disconnect between testers and developers, in that they might want to use the removal of estimates to skip out on "irrelevant meetings" where key implementation decisions are taken.

4 - Report on/off track

Separate planning without detailed estimates.

In this scenario testers attend backlog groomings and sprint plans to stay up to date with the planning and the product as well as get attention on testability needs; but they have a completely separate planning done together with developers (costs time) or with testers only (may hurt developers' understanding of/respect for/interest in testing). From a project planning perspective testing "just exists" and whenever the team suspects there's a risk for the testing/bug fixing to be late this is reported and dealt with just like any other case where estimates are wrong or development delayed.

This separate planning would happen as soon as possible after the sprint plan is complete (so that the scope for the sprint is known). Testers discuss if there are any other activities they should plan other than the regular testing of stories. They also look into if stories should be tested together to avoid costly retests. This makes it easier to take testing needs into consideration without messing up the development planning and testers can make rapid changes in their plan without impacting development stories.

This has some interesting benefits but there's a great risk of testers forming a subteam within the team, potentially hurting the developers' interest and effort in testing (information flow should be less of a problem since groomings and sprint plans are key to be able to do the test planning).

If you choose this approach, I would strongly recommend leaving a bit of time open for unplanned work, as bugs not related to development stories are more likely to be discovered, and the help testers need might not be taken into consideration when planning; that extra time has to still exist for when testers need help. This might seem stupid but if you've been in a software project you know some sudden needs from testers (blocking bugs, needed support tool updates etc.) are hard to predict at the sprint start.

Finally, the name for this approach is a bit strange. If we're not estimating testing (at all) we can't possibly say if we're on track or off track. I thus want to again stress that in reality we always make estimates, but rather than debating whether one of many tasks should be estimated at 3 or 5 hours we look at the whole picture and make a rough estimate of whether we'll make it in time or not.

If you choose to approach testing like this, be careful. Putting testers with weak communication skills, weak reporting skills or an unwillingness to make their work transparent in a team like this can lead to isolation and a negative mythic image of testing where developers neither really understand nor care about testing ("they don't seem to care for the planning and the team so why should we care about them").

Examples of two dangerous archetypes potentially advocating for this approach for "the wrong reasons":
Waterfall-accustomed testers or developers who still haven't figured out how to work with testing in scrum and other agile approaches. Also testers who don't want to, or lack the ability to, make their work reasonably transparent.

The actual implementation

In a real life scenario we move a bit in between two or more "squares"; for instance testers will sooner or later need to do work that only benefits testing, so completely following the developers' stories is a myth (or an example of a dysfunctional team). However, testers not feeling like they are allowed to test things outside the current development scope, e.g. because of a big pressure to close stories, is something I've come across multiple times, and with that said my goal is to highlight options. The examples in this post are meant to help you see alternatives; they're not models, ready to implement.

My hope is part 1 helped you understand the dynamics of estimates and planning while the second gave you some practical ideas to experiment with. Like in part 1 I want to emphasize that these are still thoughts in progress but I can't wait forever to share them if I want to get feedback...

Finally: Thank you once again Jari Laakso for all your feedback on the first post!

26 May 2016

House of Test

Earlier this month I left House of Test. It was not because I disliked the company but because I was offered an amazing opportunity at another company I obviously love.

Since so many have asked me how it is to work for House of Test I hope giving my view, as an ex-employee, will shed some light on what "the House" really is.

Consultancy is still consultancy
The first thing to make clear is that House of Test is still "just" a consultant company. Money doesn't magically materialize and as an employee you're expected to actively work to find or create new working opportunities and help with marketing the company. Also far from all assignments are glamorous and you won't actually meet your House of Test colleagues that often.

Growing pains
House of Test is also growing quite a bit right now, which slowly lessens some of the challenges of being a small consultancy (e.g. less pressure on each consultant to help market and hunt for assignments). But it also introduces new challenges related to e.g. communication and each employee's influence.

As with any growth there are growing pains as everyone tries to adjust to the new environment, nothing strange but also nothing magically gone because it's House of Test. One example would be finding a good format to better facilitate the growing numbers at the company's gatherings.

"I'm not good enough for House of Test"
The reason people ask about how it is to work for House of Test is because House of Test is pretty famous within the (context driven) testing community. That in turn creates an interesting challenge (which is also partly why I write this post):

Great people not thinking they are good enough to work for House of Test.

This might sound silly to some but it's been a serious recruitment issue for House of Test, and just in the two years since I joined I've heard "I don't think I'm good enough" more than a couple of times from great testers I know.

The truth: House of Test is filled with passionate testers, many of whom have made a name for themselves by speaking at conferences (we'll get back to this), doing well in testing challenges or being otherwise active in the testing community. The common denominator is not godlike testing skills however; it's passion and a willingness to learn. Many of the testers in House of Test started fairly junior but have grown for reasons I will later explain and even so, many of the current ones are still fairly early in their development.

... meaning you're good enough, just to make that clear.

Education as a focal point
The one thing I think differentiates House of Test is their view on education and learning.

Internally, conferences, courses and other learning activities are probably the single most discussed topic, and call-for-papers reminders are posted for basically every (major) testing conference on the planet. This is reinforced by management in many ways, one being that I don't think I've ever heard them say e.g. "you would miss too much time from your assignment" or "the timing is bad" when someone requests to go to a conference or course... of course there's a maximum budget for education but that budget is also through the roof compared to any other company I've been in contact with.

Another interesting thing is how top end education is preferred over the cheaper, local alternatives. Many hotties (myself included) have gone to, for instance, PSL in Albuquerque, CAST in various places in the US, RST no matter where it's available and Let's Test, simply because they are considered to be the best available. This focus on quality rather than cost is something I, now being an ex-hottie, will definitely bring with me.

On the flip side there's an expectation to want all this. It can rise to a level of unhelpful pressure to send in abstracts or attend learning activities (including after work hours in some cases).

One final clarification though: A problem I've experienced in other companies is the pressure to immediately explain and demonstrate what you've learned from a course or conference. That doesn't exist at House of Test (my experience). The founders seem to understand/trust that it often takes plenty of time and/or specific contexts to fully grasp the value and that education is a long term investment in general.

To summarize:
My view is hotties improve their testing skills unusually quickly simply because all the best tools to do so are introduced and available to them. For some this may create an unhelpful feeling of pressure to improve though.

Being spread out in "four and a half countries" (the half country is actually not Denmark; it's the globetrotters) combined with being rather small poses a major communication challenge. This works fairly well though...

House of Test has an active Slack chat, quarterly gatherings and at basically any major or local test conference you'll meet colleagues.

The gatherings themselves are worth mentioning as they are one of the greatest perks I've experienced. Imagine having a small conference packed with people like Maria Kedemo, Ilari Henrik Aegerter or Carsten Feilberg every three months. That's a pretty awesome employee benefit in itself! I don't know how the format or dynamics in these gatherings will be impacted by the company's growth but during my time they were simply amazing.

Is House of Test for you?
If you like to work with highly skilled professionals who have strong opinions and a willingness to debate basically anything, you will feel right at home. A willingness to learn is definitely required to function well, and an interest in, at some point, standing on a stage and sharing your experiences helps too but is not necessary.

If you just want a quiet 9 to 5 job, have trouble dealing with consultancy in general or disagree with the context-driven testing principles, you probably have less to gain from joining the House.

... that's the simplified explanation.

On one hand House of Test is just an ordinary consultant firm with, for instance, the need to hunt for assignments, the risk of ending up in less than optimal work places and distance to your closest colleagues.

On the other hand House of Test is like a wet dream for passionate testers; you will work with some of the best testers (and testing teachers) in the world, education will be a focal point and anything awesome happening in testing will be introduced to you the moment it becomes public... or often before.

Thank you House of Test for two awesome years!

11 April 2016

Time estimates in testing, part 1

Why this blog post

Activity planning and time estimation in testing is complex to begin with; combine that with cross-functional teams (having to actively take others' planning into consideration) and I've seen a lot of confusion. In this blog post I hope I can arm you with some useful ideas on how to deal with activity planning and time estimates in testing.

... and that turned out to be a much bigger task than I anticipated, so in this first part I will speak about the more overall concepts and in the second part I'll provide more concrete explanations of each approach (see the graphic below).

Key attributes

When I started listing various ways I've been working with estimates I continually came back to two variables that seemed central:
  1. To what degree do testers and developers share their planning?
  2. How much trust and effort is put into time estimates?
For this reason I made this little graphic:

Do note that these are still thoughts in progress so I do appreciate any feedback on whether this makes sense or not.

Separate planning vs Shared planning

I've been in teams where testers were only or almost only expected to test what had just been implemented (as in testing the exact work described in the implementation stories thus sharing the planning with developers) as well as teams where testers were expected to create their own planning based on the testing needs; both extremes come with their own benefits and drawbacks:

Shared planning:
 + less administration (as in cheaper)
 + closer relationship between testers and developers
 + quick feedback loops in general
 + easier to get focus on testability improvements in the code

Separate planning
 + more focus put on regression, system (e.g. consistency) and integration testing
 + easier to get work only affecting testers prioritized
 + planning can be optimized to fit testing

Two dangerous mind traps I've experienced:

Shared planning
Risk: Testing seen as something very simplistic where you just test the product's new features with no or little regard to the big picture.

Negative effect: General testing activities, such as overall performance testing, integration testing or regression testing are down-prioritized.

Implications: Bugs related to e.g. integration, unforeseen legacy impacts or product inconsistency.
Coping: To deal with this it takes testers or other team members who are good at advocating for testing in general and can motivate why certain test activities make sense even when no obviously impacted code has changed.

Separated planning
Risk: Testers and developers move apart and create two separated subteams within the team.

Negative effect: Impacts information sharing, understanding of each other's work (which in turn may impact the respect for one another's profession) and developers' feeling of responsibility for quality.

Implications: Prolonged feedback loop (code developed to bug reported to bug fixed to fix verified), worse quality on code handed over from development, testers lacking important information making them less effective/making them take bad decisions.
Coping: A well functioning team and social management, testers who're good at communicating their work and good at speaking with developers on the developers' terms or vice versa, developers taking a great interest in testers and what they do.

I'll elaborate a bit more on this in part 2.

Effort put into estimates

Let's talk about the two extremes first.

When we put a lot of effort into estimates we may for instance split the work into small tasks to make them easier to estimate and try to give each an as exact time estimate as possible; e.g. "these 6 tasks are estimated to 3, 4, 6, 3, 1 and 9 hours so the time to test this feature should be around 26 hours".

Little effort is put into time estimates when we, for instance, accept an external time constraint, e.g. "get the testing ready for the release" or "finish your testing within this sprint", and instead inform the one setting the time constraint what we roughly think the quality of the testing will be at that point.

Important: "Quality of the testing" in this case refers to how deep/well we expect we will be able to cover the various features. This is not the same as the "quality of the product". Why the latter is not what is (typically) talked about is because that oe is way too complex to estimate and often out of what we can actively influence. For instance it's not our call if a bug we've identified should actually be fixed, that's up to the stakeholder(s).

At this point the former might seem more reasonable but it's not that simple...

Accuracy of estimates

There's no "done" in testing. This is sort of true for development as well, we can always continue to improve the code itself or polish the actual feature, but at least we can observe the product and say "this is good enough" and, important, that state of "done" can be fairly well-defined/described.

Even though some try to pretend it's not true; testing does not at all work like this.

Testing happens inside your head, development too (to a large degree); however, the product of testing also stays in your head and relies on your ability to communicate it, while the product of development can be easily observed by others. This complicates estimates since we cannot keep discussions around a fairly well defined end product; instead we have to leave it to everyone's interpretation of the task and do our best to communicate our own interpretation so that we at least roughly estimate the same thing. For this to work at all, testers first need awareness of what they actually do and the necessary communication skills to explain this to others, and the receivers must have enough understanding to interpret this message correctly. No human on earth (as far as I know) does even one of these things flawlessly (aka: communication is hard).

With no well defined "done", everyone will have to rely on their interpretations of what they think needs to be done and what they think the receiver asks for. That in turn will impact the estimate's accuracy, but this is just part of the problem...

... the other part

On top of estimating a task we cannot clearly define we also have to estimate something that is inherently plagued with great levels of uncertainty:

The time it'll take to test something depends on external factors such as the quality of the code, the complexity of the chosen design (which is often not set when estimates are done), the stability (up-time and robustness) of necessary environments, the help we'll get to correct problems found/blocking etc. To continue the reasoning, the quality of the code depends on things such as how hard something is to implement, the skill level of the developer(s) and how stressed they are (if a developer has a tight schedule she can typically stress the design a bit to get it out the door in time, which in turn affects the quality, which in turn affects the effort required to reach the abstract "done state" when testing), and the stress level depends on even more factors. I can go on but let's just say the time it takes to test something to the "abstract, hard to reasonably accurately define, level we desire" depends on so many external factors that the estimate's uncertainty is huge.


I once told a project manager this and he replied: "Bullshit! My testers ace their estimates almost every time!" (in a product with pretty crappy quality and expensive testing should be added). And here comes a conflicting truth:

We can't predict when we'll be "done" but we can be "done" by Wednesday if you want.

In development we need to produce some kind of agreed upon product based on, at least, our explicit requirements. Remember how testing did not have a defined "done"; well, taken to its extreme we can (almost) say we're done at any point in time since it's unclear what done is anyhow: "These are the findings we've made so far (and are able to communicate). You told us to be "done" by today so we are "done", but our professional judgement is this needs more testing as the following areas are still considered unstable or only superficially tested...". In a sense we always do this, estimates or not, but in the latter case we don't even spend time trying to guess; instead we do our best to put ourselves in a good enough state when the time is up.

... and before someone talks about bad attitude or lack of professionalism; this is of course not how it's practically done. Some level of estimation is always performed but rather than saying: "We think this will take 10 hours, this 5 hours, this 2 hours and this 8 hours." we might say for instance "We think we're on track for the release but the quality at this point is unusually low so we're a bit worried this might change".

Planning based on time or effort

This topic is too big to cover in this post but a quick rundown.

When planning our testing we can either focus on effort or time. Effort means "we want to reach a state where we/someone else feels confident about this particular part before we leave it" while time means "we want to spend the fixed amount of time we have in a way so that we've looked at everything at least briefly". In the former we're doomed to not be "done" if something runs late, leaving project management with little choice but to delay e.g. a release if they don't want to take on a considerable risk (leaving parts completely untested). But this also allows us to argue better for the need of more testing by saying "we haven't even looked at these four features", and we'll spend less time revisiting areas and having to "relearn" them, as an area is left more in a "done state".

In the latter we will hopefully have found at least the most obvious, serious problems in all areas of the product and can thus make a general assessment: "our testing is fairly shallow in most areas and we would like to spend more time with the product but from what we've seen all of the new features seem to be okay at a basic level". The drawbacks with this are that it's harder to argue that we actually do need more time to test if stakeholders aren't well informed and comfortable with the concept of "uncertainty", as well as a greater risk of needing to revisit areas for further testing.

How does this relate to estimates? Well in my experience the effort approach prevails when we have estimates and plan our work around these actual stories since "we shouldn't" have too many open stories at once and when they are closed, they are closed so we have to "complete them". In the same way the time approach is more common when we skip estimates (my experience), at least when in combination with a separate test planning. If we have a set deadline (e.g. sprint end or release date) we can more naturally plan in a way so that we've at least looked at everything once.

I say this because most stakeholders I've been in contact with seem to prefer the time approach but still argue for the importance of estimates, and these concepts seem a bit contradictory, at least in the context of testing. One last note: The contradiction also applies to development: if we get a bit of everything ready we will at least have some kind of product in the end. But, as described earlier, since more has to be in place for a piece of software to make sense, combined with estimates being easier to get accurate enough, the contradiction is not as big of a deal (my biased opinion, feel free to challenge this).

Wrap up of estimate or not

To wrap up:

Since time estimates of testing will always be plagued with great levels of uncertainty, no matter the estimation effort, the question is if detailed estimates really provide enough additional value to support their cost (both the time and the risk of misleading stakeholders into believing we know when we'll be "done"). The "ideal" level of estimation is also highly context dependent, the context changes over time and we can't really objectively measure whether our approach is good or bad; so we'll have to experiment and rely on our ability to observe/analyze.

... and finally, don't confuse time estimates with planning; planning happens in both cases and the effort spent on planning has little/no correlation to the effort spent estimating (my experience).

Stay tuned for part 2...

25 August 2015

How to practice software testing

During open season after Erica Walker's presentation at CAST, I mentioned a few useful tools for practicing software testing or software testing related skills (rather than passively watching/reading/listening). With this blog post I want to expand that a bit and share some of the applications/sources I've found useful when actually practicing to become a better software tester.


Bitnami provides simple installers (local install) for several well-known web applications such as Wordpress, Moodle and ExoPlatform. The installer automatically sets up a web server, database server and the application itself. This is a great sandboxed environment for you to play with and you have access to the application's code and database content allowing you to do pretty nifty stuff.

Since the applications available on Bitnami are fairly large systems you'll find opportunities to focus your testing on basically any quality characteristic or test technique, no matter which one you choose. Why not try a full equivalence partitioning, value selection and variable combination testing table for the "post thread as moderator" form in phpBB, or a usability analysis of PrestaShop?

The drawback with these big applications may be that they are a bit intimidating/take time to learn. In that case try the many software download sites like SoftPedia but be aware that some software you find this way might come with various malware.

Joining open source projects can also be a good way of practicing testing while also giving something back. Popular open source code management sites like GitHub and SourceForge are great places to look for applications in need of testers.


Install XAMPP (fully preconfigured web server) and start running queries against the MySQL server. This also gives you the ability to practice writing (and running) simple scripts in e.g. PHP to manipulate/display the database content. Getting familiar with phpMyAdmin (preinstalled) is also a good idea for any web application tester.
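If you prefer to practice your queries from a script rather than phpMyAdmin, here is a minimal sketch in Python (the post suggests PHP; any language works). It assumes XAMPP's default MySQL setup of localhost, user "root" and an empty password, and requires the mysql-connector-python package.

```python
import mysql.connector  # pip install mysql-connector-python

# Assumed XAMPP defaults: MySQL on localhost, user "root", no password.
connection = mysql.connector.connect(host="localhost", user="root", password="")
cursor = connection.cursor()

# Start simple: list the databases, then point the same cursor at your own
# practice database and experiment with SELECT/INSERT/UPDATE queries.
cursor.execute("SHOW DATABASES")
for (database_name,) in cursor:
    print(database_name)

cursor.close()
connection.close()
```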

If you want to practice online I recommend Head First Labs. You might need a MySQL reference (available online) though, to solve all their exercises, since they refer to pages in an O'Reilly book.


A great place to take your first few steps in API testing is predic8. They have an online REST-API available that you're free to play around with. I recommend fetching Postman and just start making simple GET requests. Use predic8's tutorial to help you progress.
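The same first step you would take in Postman, sketched with Python's standard library. The URL is an assumption based on predic8's public demo API; check their tutorial for the current endpoints.

```python
import json
import urllib.request

# Assumed demo endpoint; replace it with whatever predic8's tutorial lists.
url = "https://api.predic8.de/shop/products/"

with urllib.request.urlopen(url, timeout=10) as response:
    print("Status code:", response.status)
    body = json.loads(response.read().decode("utf-8"))

# Pretty-print the JSON payload so you can start forming test ideas from it.
print(json.dumps(body, indent=2))
```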


Tons of applications exist for the sole purpose of practicing security testing. These applications have dozens of vulnerabilities built in so that you can practice triggering and exploiting these vulnerabilities without risking breaking anything. Also, many of these applications have active communities built around them where you can get help or documentation explaining the various vulnerabilities.

WebGoat (web testing, local installation)
I've only briefly used this but from what I've understood it might be the best choice available. If you search for WebGoat on YouTube you'll find dozens of tutorial, demonstration and installation videos.

Google Gruyere (web testing, online)
I've played around in Google Gruyere quite a bit. It's a good place to start and convenient since no installation is required. Also, due to its fame, several videos exist demonstrating vulnerabilities in Google Gruyere and explaining the thinking behind discovering them. One example is Alan Richardson's video.

bWAPP (web testing, local installation)
I've only briefly used bWAPP but it seemed to have potential. bWAPP is more helpful than Google Gruyere in the sense that you're informed about what vulnerability each page has.

BodgeIt Store (web testing, local installation)
A web security practice application aimed towards beginners (if I interpreted the description correctly). I haven't tried this one myself.

Mutillidae (web testing, local installation)
One more I haven't tried myself. What I liked in the description though was that it claims to give hints, which likely makes it a good starting challenge for a new penetration tester.

GameOver (web testing, VirtualBox image)
I discovered GameOver as I was writing this blog post. I haven't tried it yet but it's a VirtualBox image with several penetration testing tools and web security practice applications preinstalled (such as WebGoat). Convenient!

There are also pages dedicated to learning web security by giving the visitor progressively harder challenges (puzzles) to solve. My personal favorite is HackThisSite as I think the challenges progress at a good pace and you can always get help if you're stuck. For a quite extensive list of practice pages, take a look at the top answer to this Stack Overflow question.

If you want to practice system level penetration testing I recommend the Metasploit Unleashed free course. Also look into Kali Linux, a Linux distribution centered around penetration testing.

Information gathering

Information gathering is a critical skill for testers. A course I've just started that seems to have great potential is Google's Power Searching course. The course comes with challenges, making it interactive enough to fit this blog post.

Improve your skills in other common tools

You can improve your skills in many common tools by using training videos released by vendors or users and either mimic what's done in the videos or perform the challenges given. One example of a training video collection I've found useful is Microsoft's Office course.

Operating systems

I learned a lot about operating systems in general when I started playing around with Linux. It's a fun way to start and the amount of help you can get when stuck is mind-boggling. If you have some previous experience, give Arch Linux a chance. If you're new, something like Sabayon might be at the right level. Popular desktop releases such as Ubuntu may be a bit hard to get "under the hood" in, but for a first timer just seeing a different operating system might be enough. In that case, go with openSUSE or any of the Ubuntu derivatives (e.g. Ubuntu itself, Linux Mint or Elementary OS).

If you don't want to tinker with partitioning; use VirtualBox.

Networks and servers

Plenty of material is available online and practicing it is generally just about tinkering with your own home network, e.g. figuring out what various router configuration options do. You don't need an expensive lab to practice network administration and server setup; two devices (e.g. a computer and smartphone) and a basic network connecting them (e.g. a router with a built in wireless switch) is enough. If you feel like you don't know where to start, use for instance the course page linked to in this chapter and try the concepts described (like port forwarding, dhcp and file servers). I personally find network monitoring to be a particularly useful topic for testers.
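As a tiny first exercise, here is a standard-library sketch that checks whether a few common ports are open on a device in your home network. The host address and port list are just examples; point it at your own router or computer.

```python
import socket

HOST = "192.168.1.1"              # example: a typical home router address
PORTS = [22, 53, 80, 443, 8080]   # example ports: SSH, DNS, HTTP, HTTPS, alt HTTP

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    # connect_ex returns 0 when the connection is accepted, an error code otherwise.
    result = sock.connect_ex((HOST, port))
    print(f"port {port}: {'open' if result == 0 else 'closed or filtered'}")
    sock.close()
```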


The two most important messages I want you to remember:
  1. Do practice, it's important
  2. ... and not that hard
Good luck and if you have additional suggestions, just share them in the comment section below!

07 August 2015

CAST 2015 - A quick summary

Every conference creates some kind of special memory for me; Let's Test 2013 was meeting Helena Jeret-Mäe for the first time (the most important event so far for me as a tester) but also Richard Robinson and his Miagi-Do challenge. CAST 2013 was my first talk, which was special, but also Dawn Haynes' magical keynote. Let's Test 2014 was the barefoot introduction and NTD 2015 was the Pekka Marjamäki show. Also, as an important bonus, at both Let's Test 2014 and NTD this year I had Helena; meeting her in person is always very special for me.

So what about CAST 2015? Well there were three people that stood out to me (in no particular order):

1) Ioana Serban caught my attention at Let's Test 2014 and at CAST she spoke for the first time... and she nailed it! The talk had high value content, the most awesome slide deck I've ever seen and she packaged all this as an entertaining and compelling story! I feel fortunate to be one of the people in the room that got to experience it live! On top of that she’s a smart, wonderful person I just enjoy being around!

2) Diana Wendruff has this lovely, sparkling personality that makes me happy just by being around her. Her humble curiosity, charming humor and clever insights on top of a very empathetic core are nothing but awesome. I so look forward to meeting her again! Oh, and she has the coolest business cards, ask for one when you meet her!

3) David Leach (referred to as Kiwi-David in my tweets) was one of those who just made the whole conference better for everyone. He was a first-timer but gave a great lightning talk (the guy can present!) and, even more importantly, he's incredibly skilled at asking good questions during open season, which created massive amounts of extra value for both speakers and attendees (he was probably the most active participant during open season throughout the whole conference). On top of that he's smart and curious. Thank you Dee Ann for helping bring him there and thank you David for making a great conference even greater!

Oh, one more... I want to highlight the Speak Easy initiative. I happened to go to three of the Speak Easy presenters' talks and I sincerely think those were the three best talks I attended, which is absolutely crazy considering how little experience these speakers have. I've already talked about Ioana; the other two were Kate Falanga (talking about understanding the brand you create for yourself) and Jessica Ingrassellino (talking about the art of asking questions). I also heard great reviews about Carol Brands' talk (which I missed). I was amazed!

I could keep name dropping forever (Perze, Taylor, Pete, Liz, Mark, Jessica, Roxanne, Dawn of course...) but instead I'll stop and just say: THANK YOU everyone who made this conference amazing! And an extra thank you to the people who not only attended my talk, but made it better by adding to it during open season and after. Oh, and to all the organizers (including facilitators and the staff in the reception), I’m so impressed by the effort you put in, thank you!

To finish off with something more useful, here are my top three takeaways from CAST 2015:

1) You can turn a "boring" (but important/valuable) topic into an entertaining story! Probably my number one takeaway and something I'll definitely use, thank you Ioana!

2) Nicholas Bolton (please correct my spelling Niclas, Nicolas...) shared this wonderful analogy with me: When trying to decide which solution to go for, you sometimes have to look at it as being in a maze; if you run around like a headless chicken you'll (likely) not find your way out in time and die. On the other hand, if you stand still and only debate where to go, you'll starve to death having accomplished nothing. At some point you have to stop arguing about which solution to go with and actually try one.

3) Helena introduced me to an interesting problem prior to CAST: She and I are both getting to a level where we can sort of look down at everything in an organization and actually have the authority to deal with problems we see at various levels. However, we can easily identify problems that would occupy 5000% of our time for the rest of our lives as well as impact people in extremely complex ways. So how do we choose where to put our effort?
Roxanne (congratulations on becoming a board member!) commented that maybe that problem partly comes from the fact that we're new to this position/perspective; we probably felt the same way once upon a time about the "lower level stuff" (but experience has taught us how to navigate that). My take on her comment is: maybe we should worry less about "figuring out what is right", because we don't have the necessary experience and understanding yet, and instead head out and explore with learning as the main objective. Aiming to solve "the right thing" currently just makes us stall and feel inadequate (ties back to takeaway two), which isn't helpful.

So, once again, THANK YOU! I hope to see all of you soon again! CAST 2015 was awesome!

29 June 2015

Digital K-Cards

The page has been taken down since I felt I didn't need a web server anymore.

If you want the code, drop a comment or contact me via Twitter / LinkedIn.

A few days ago I was still in the Alps with my colleagues at House of Test, mixing some recreation time with test discussions. During one of these discussions we had a lot of parallel tracks going on and the immediate question when people raised their hand was "is this on the current thread or is it a new one?" which took a lot of focus away from the topic. Some of us started improvising K-Cards. I used a background color app which cycled through a dozen different colors, Carsten had something similar and Ilari used two different glasses of beer.

After this discussion, Lars Sjödahl and I joked about creating a K-Card app. A few hours of programming later, the K-Card "app" was born (click anywhere on the card to change color):
K-Card example: http://brickarp.se/kcard/index.htm?text=5&image=http://brickarp.se/kcard/hot.png
K-Card information: http://brickarp.se/kcard/info.htm
K-Card setup form: http://brickarp.se/kcard/setup.htm
K-Card generator: http://brickarp.se/kcard/get_card.php

Main features:
  • Turns your phone (or similar) into a fullscreen K-card.
  • Card number, logo on the card and whether or not to include the rat hole card can be customised.
  • Can be used offline (no server components for the actual card).
  • A card generator created for e.g. conferences helps provide unique card numbers for all attendees.
If you have suggestions, questions or find bugs, leave a comment or contact me in any other way you prefer.
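The actual card is a small web page, but to illustrate the core mechanism (a fullscreen colored surface that cycles to the next color on every click/tap), here's a rough Python/tkinter sketch. It's my own toy illustration, not the code behind the app, and the colors and card number are just examples.

```python
# Rough illustration of the K-Card idea: a fullscreen colored window that
# cycles to the next color each time you click it. Standard library (tkinter).
import tkinter as tk

COLORS = ["green", "yellow", "red"]  # example colors; the real card set is configurable

class KCard:
    def __init__(self, card_number="5"):
        self.index = 0
        self.root = tk.Tk()
        self.root.attributes("-fullscreen", True)
        self.label = tk.Label(self.root, text=card_number,
                              font=("Helvetica", 120), bg=COLORS[self.index])
        self.label.pack(expand=True, fill="both")
        self.label.bind("<Button-1>", self.next_color)   # click cycles the color
        self.root.bind("<Escape>", lambda event: self.root.destroy())

    def next_color(self, event=None):
        self.index = (self.index + 1) % len(COLORS)
        self.label.configure(bg=COLORS[self.index])

    def run(self):
        self.root.mainloop()

if __name__ == "__main__":
    KCard().run()
```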

Finally: Thank you Andreas Cederholm, for spending a few moments to test the app!
Notice though: It's probably still quite buggy due to several changes made after Andreas tested it.


24 March 2015

Testing Education: Curriculum

I promised a long time ago I would write about each of the subjects in the testing education in detail. I've learned that's not as straightforward as I thought. I need to be careful about what I say/imply about the students, course content, exercises etc. (law, policies, respect for the school (content ownership) etc.).

Due to that I lost the energy to do the detailed posts. But, since I receive a lot of questions about the curriculum, here's a post describing it on a more general level. The subjects below are arranged in the same order as they are taught to the students, but the items in each of the bullet lists are not sorted in any particular way. Also, the lists are not complete in any way; they're basically what I find noteworthy among all the big and small things brought up. Finally, from test design onward it's not as detailed since those courses haven't been finished/started yet.

Introduction to testing (4 weeks)

  • What is testing and why do we test software?
  • Note taking and Visualization (mainly mind maps)
  • Heuristics
  • Basic test design
  • Risk
  • Oracles
  • Coverage
  • Bugs and bug reports
  • When are you done?
  • Tools (e.g. Sikuli, Selenium, VirtualBox, Apache and jMeter)
We started this course by, first thing on the first day, giving the students a flash game to test, a basic testing mission and a bare minimum of introduction to get started, just to let them get a minimal practical idea of what testing is. The rest of this course was a combination of theory and practical testing; sometimes we started by letting them test and then explained some concept they had used (e.g. oracles), and sometimes we did it the other way around, e.g. when explaining risk we first gave a lecture and then let them create a simple risk analysis of an application before they actually got to test it.

The testing in this course was (of course) very superficial and the goal was to introduce many different concepts to give them a "foundation" rather than focus on teaching them one part really well. All in all the students, in one way or another, worked with ~10 different applications including desktop (Windows/OS X depending on their laptop), web, mobile and, in some cases, Linux desktop applications run in VirtualBox.

You have to remember that the students in many cases came fresh from high school and/or did not have any technical background, so it was just a brief introduction to the concepts and tools mentioned.

Think like a tester (6 weeks)

  • Critical thinking
  • Lateral thinking
  • Heuristics
  • Bias and Fallacies
  • Problem solving
  • Test polarities
  • Models and mental models
  • Information gathering/learning
The general setup was similar to the one used in the introduction course; however, during the "think like a tester" course we added a lot of general exercises (e.g. what can you do with a brick, the alien tour exercise and many others) to complement the theory and practical testing.

During this course, James Bach visited the class in Malmö and my class in Örebro joined in via video. A great opportunity for the students to see, in real life, one of the experts so often referenced. The highlight was James testing the same application the students had tested as part of their examination for the introduction course. Malmö had several visits from prominent testers (thanks to Öredev) but I leave it to Martin and Maria to speak about those.

Project contexts (3 weeks)

  • Lean and Agile
  • Scrum and Kanban
  • Waterfall and V/W-model
  • Outsourcing
  • Testing in agile
  • Common challenges for testers
The most interesting part of this course was probably a pretty detailed lecture and discussion on agile testing, followed by a big exercise where students were asked to identify risks in various contexts (like isolation etc.) and figure out what to do to mitigate them/solve the problems.

Programming (6 weeks)

  • Java
  • Automation
  • File systems
  • Programming theory (e.g. compilers, bin/hex and memory allocation)
  • TDD/unit tests
Most of this course was spent by the students programming their own testing tool (a tool to generate and interpret test data); I'm working on publishing some of them. This has by far been the most challenging course, making students work day and night to get their examination applications ready.
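The students' tools were written in Java and were far more elaborate, but to give a rough idea of what "generate and interpret test data" can mean, here's a minimal Python sketch of my own. The file name, record format and rules are made up for illustration.

```python
# Rough sketch of a "generate and interpret test data" tool: write random
# records to a CSV file, then read the file back and report records that
# violate the expected rules. Standard library only.
import csv
import random
import string

FILE_NAME = "testdata.csv"  # made-up file name for the example

def generate(path, rows=100):
    """Write rows of (id, name, age) test data, occasionally seeding a bad age."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(rows):
            name = "".join(random.choices(string.ascii_lowercase, k=8))
            age = random.randint(0, 120) if random.random() > 0.05 else -1
            writer.writerow([i, name, age])

def interpret(path):
    """Read the data back and report rows that break the expected rules."""
    problems = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            row_id, _name, age = row
            if not (0 <= int(age) <= 120):
                problems.append(f"row {row_id}: suspicious age {age}")
    return problems

if __name__ == "__main__":
    generate(FILE_NAME)
    for problem in interpret(FILE_NAME):
        print(problem)
```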

Test methods (4 weeks)

  • Schools of testing
  • TMap
  • Context driven testing
  • Myths
We teach a context-driven mindset and the primary objective of this course was for the students to learn about other ways of looking at testing. Most focus was put on ISTQB since it's so common; the students got to read the syllabus in detail as well as discuss its strengths, weaknesses, goals etc. in true "critical evaluation" style.

Test design (24 weeks, 50%)

  • Test approaches
  • Test techniques
  • Technology
  • Heuristics
  • Risk based testing
  • Coverage
  • Oracles
  • When to use what?
  • Security testing
  • Performance and reliability testing
  • Usability and UX testing
This course runs in parallel with the test strategy, bug reporting and test reporting courses described below and is (by far) the biggest course in the education. The goal is to make it rather practical, letting the students use their newly acquired knowledge as quickly and as much as possible; thus the courses will somewhat intertwine, as it would be pretty wasteful/fabricated not to do test design, test strategy and test reporting when practicing bug reporting etc.

Test strategy (12 weeks, 50%)

  • Risk based test management
  • Heuristic test strategy model
  • Visualisation
  • Testability
  • Test framing
  • Test processes
This is the other course starting this week. The most important goal is to make students comfortable when requested to create or present their strategy.

Bug reporting (6 weeks, 50%)

  • Content
  • Different receivers/readers
  • Bug pinpointing
  • Reasons not to correct errors
  • Oracles
  • Bug trackers
  • Style and structure
  • What's a bug and what is not?

Test reporting (6 weeks, 50%)

  • Rhetoric
  • What are we reporting and why?
  • Metrics
  • How not to mislead the reader
  • Style and structure
  • Different receivers/readers/listeners

Testing tools (6 weeks)

  • Various categories of tools
  • Introduction to common tools
The main reason for this course is that students in earlier vocational university software testing educations (not taught by us) felt practical testing and a basic knowledge of tools were the biggest things lacking in their education. Apart from tools, this being the last course, we will also spare some time to talk about practical tips and tricks, preparing the students for the internship and for working full time as testers.

Internship (8 weeks)

To finish off the education, students are sent on internships at various companies. If you're interested in taking on a student, feel free to contact me in whatever way you prefer (email, Twitter, LinkedIn, as a comment, buy me a beer at some conference etc.). By the way, I've not forgotten about you Karlo; the students have been informed.

Learning more

You can read about the education (in Swedish) on EC Education's homepage. If you have any further questions or are interested in attending the next iteration of the education (Malmö or Örebro) in September, don't hesitate to ask.