28 March 2014

What happens if...

Intro

The quality categorization below derives from the work of James Bach / the Test Eye.

What happens if...

... we don't care about security

The internet is a playground for brilliant people who like to figure out how to get past applications' moats or bring them down. Leaked credit card numbers, leaked personal or other sensitive information, leaked passwords, malware on the customers' computers, defaced company websites and unavailable services, all due to security shortcomings in software, happen daily. This can lead to lawsuits, loss of customers, the company losing a lot of money or, worst case, bankruptcy.

Even if your application doesn't use network communication, no one will care the day someone hacks/steals a computer with (or even worse, due to) your app and gets their hands on sensitive information. Also, few (read: no) customers will care whether it was a third party product that failed or not. The "we didn't build it so we don't have to test it" mentality is simply not productive.

... we don't care about charisma

The number of applications out there is huge and that has changed how we pick our preferred ones. Now you can install an application, look at it, dislike its design (not professional, fresh, modern, elegant, beautiful, simple, safe... looking enough) and quickly move on to the next, without even "using it". I repeat, you can lose customers simply because your product doesn't look like the product it is!

"It's just an [pick your type of application], no one will care about how it looks" is rarely true anymore. Also remember charisma is not just "cool design features", it can mean anything from simple/elegant to looking robust/secure (we do judge an application by its cover).

... we don't care about usability

Just because someone initially chooses your application doesn't mean they'll stick with it. One annoying extra step or interruption in their workflow might be enough to trigger an "uninstall and move on". Even if customers are tolerant and the competition isn't that fierce, bad enough usability can render key features useless (and the money spent developing them wasted).

Also, usability problems can add a lot of cost for customer support, as they'll have to spend time explaining things to customers that the product should make obvious.

One big enemy of usability, in my experience, is programmer shortcuts. "Well, the framework doesn't really support dropdowns so we had to go with five links instead." Beware of just accepting this; usually technology isn't the limiting factor, even though it might feel that way when you're in the middle of things (my experience as both a tester and a programmer).

... we don't care about robustness

A crash or loss of service, even for a brief moment, can be enough to get your application uninstalled. Also, downtime can be extremely costly for customers and, worst case, a customer may not just stop using the product but also demand compensation. No matter what, frequent/long downtime and other robustness problems can quickly hurt the company's/product's reputation. Remember, time is money.

One common mistake is that many testers just open the application, do their tests and close it down/reset. Customers, on the other hand, often work for hours or even weeks without closing anything down. Can your application handle that? Are you sure?
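
As a hedged illustration (my own, not from the quality model mentioned above): a long-session or "soak" check can be as simple as repeating a typical user action for hours while watching resource usage, instead of resetting after every test. This is a TypeScript/Node sketch and performTypicalUserAction is a made-up placeholder for whatever your product actually does.

// Rough sketch of a soak test: repeat a realistic action for a long time and
// log memory usage, rather than resetting the application after each test.
// performTypicalUserAction is a hypothetical placeholder, not real product code.

async function performTypicalUserAction(): Promise<void> {
  // e.g. open a view, load some data, edit it and save it back
}

async function soakTest(durationMs: number, pauseMs: number): Promise<void> {
  const start = Date.now();
  let iterations = 0;

  while (Date.now() - start < durationMs) {
    await performTypicalUserAction();
    iterations++;

    if (iterations % 100 === 0) {
      const heapMb = process.memoryUsage().heapUsed / (1024 * 1024);
      console.log(`${iterations} iterations, heap ~${heapMb.toFixed(1)} MB`);
    }
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}

// Run for a full "office day" with a short pause between actions.
soakTest(8 * 60 * 60 * 1000, 500).catch(console.error);

The loop itself is trivial; the interesting part is what you watch while it runs: memory, handles, log growth, response times.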

... we don't care about performance

We are an impatient, stressed society; if a webpage/application/service doesn't load quickly enough, we simply won't wait to see whether the result is pleasing or not. Slow apps are quickly gone.

Once again some experience: if a function is really slow and can't be tweaked to go faster (e.g. a complex algorithm), maybe it shouldn't be in the product at all, even though it seems valuable. Or the algorithm should be simplified, or the work turned into some kind of background job (a rough sketch below). All this to avoid frustrated users (if a function is in the product, it's expected to work, and work well!).
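
A minimal sketch of the background-job idea, with hypothetical names and a naive in-memory job store purely for illustration (a real product would use a persistent queue): the user gets an immediate answer and the slow algorithm finishes later.

// Hedged sketch: run a slow algorithm as a background job so the user isn't
// kept waiting. The job store is an in-memory map only for illustration.

type Job = { status: "pending" | "done"; result?: number };

const jobs = new Map<string, Job>();

function slowAlgorithm(input: number[]): number {
  // placeholder for the complex calculation
  return input.reduce((sum, x) => sum + x, 0);
}

function startJob(input: number[]): string {
  const id = Math.random().toString(36).slice(2);
  jobs.set(id, { status: "pending" });

  // Defer the heavy work so the caller can respond immediately.
  setImmediate(() => {
    jobs.set(id, { status: "done", result: slowAlgorithm(input) });
  });

  return id;
}

// Usage: respond right away, let the client poll (or be notified) later.
const id = startJob([1, 2, 3]);
console.log(jobs.get(id)); // { status: "pending" }
setTimeout(() => console.log(jobs.get(id)), 10); // { status: "done", result: 6 }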

... we don't care about compatibility

I happen to like my browser. If you don't support that browser (support meaning "works better than the competitors", not "well, it does, but...") I won't even consider you an option... And by the way, the world is going mobile and Windows is no longer "the only" operating system on the market.

So what happens if you "support Windows, iOS and Linux" but only Windows is actually tested and in reality your product hasn't even started on Linux for the last three versions? Well, a customer could demand their money back, which adds administrative costs, or compensation for money lost installing/upgrading. A company buying your product might ditch you on all platforms due to distrust, or you might be headlining a lot more blogs than you wished for (or rather, not for the reasons you wished for). Simply put, it would be considered lying, and lying is rarely appreciated. Also, shortening the list of compatible systems due to uncertainty (due to lack of testing) is of course a deal breaker for potential customers on those systems.

... we don't care about IT-ability

Every install/reinstall and update/upgrade (or lack thereof) is a potential source of frustration and incentive to uninstall. Make sure you avoid any unnecessary friction or even a great application may find itself deleted.

Also, in bigger companies there's likely someone maintaining/supporting your product internally. Those guys tend to have quite a bit of say when it comes to selecting applications, so you don't want your application/brand/company to be on their blacklist. "It seems like a great product but our IT department says it's impossible to have in our software infrastructure."

Make time

We all have limited resources and have to prioritize. What I'm trying to explain above is not "we need to double the number of testers" but rather "we need to prioritize testing and make sure the right people focus on the right stuff".

Of course a big part of this is arguing for the importance of quality, not just for the functions themselves. I hope the list above can aid you in this.

Programmers

Many things are better tested by programmers (they know what has been implemented and can manipulate code for targeted, temporary testability). And no matter what, a simple bug caught by a programmer and solved immediately is a lot less time consuming than one a tester finds: the tester has to report it, may be blocked until it's fixed and the application rebuilt, the programmer has to stop what they're doing, and context switching likely happens for both parties. Don't underestimate the saved cost of programmers who test their own code well and (almost) always hand over high quality stuff!

Product owners

My product owner does an awesome testing job! What he does is simply make sure that what he gets is what he asked for, and since he wrote the requirement/specification/user story that led to the code, he knows what he actually meant rather than what can be interpreted from the text (meaning is often lost/changed when thoughts are translated to written language). His work saves me (the tester) a lot of time figuring out whether he actually means what I interpret he means.

Domain experts

In my current company, much of the technical complexity of our main product is in the handling of security cameras. In this area I'm glad we have a great domain expert who both tests and technically supports my testing when needed. This once again saves me time to focus specifically on e.g. security in this area, which I have more knowledge about.

Combined effort

I (let's hope so at least) have a broader knowledge about testing and potential ways to put the system at risk of failing. However, programmers, product owners, domain experts, graphical designers/interaction designers etc. know their own domains much better. Teaming up when testing is rarely a bad idea, as it's a way for both parties to learn from each other and better utilize the strengths of each person (e.g. the domain expert shows various ways to feed data into the system while the tester suggests various kinds of data to try).

Not everything can be tested in isolation

Testing "what has just been developed" is not enough (which some Scrum fans seem to find provocative). Often when testing we simply need to zoom out, for several reasons:
  • Consistency, flow (as a usability aspect) and data flow (as in how data is transferred through the product) require you to look at the whole product, or a big part of it.
  • We learn new things all the time, which also concerns old functionality, and one difference between programming and testing is that testers don't need to do expensive refactoring to apply new knowledge to what already exists.
  • We test how new functionality integrates with old, and sometimes those new changes also affect how old functionality integrates with other old functionality.
  • Security, performance, robustness etc. are not always efficient to test per specific function; instead you need regular sessions where you test them for all the functionality in a sprint, from the last month or, once again, in the whole product.
  • You don't always know what existing parts new functionality impacts and programmers are not always aware of what might, even on a code level, affect existing functionality (and if they are it's not necessarily communicated). Thus we need to simply "look over the whole thing" or look at a more general area every once in a while.
  • ... and when looking at old functionality we inevitably find issues we've missed before due to new tests, new knowledge, changed context etc.
What sums this up, I think, is: a lot of what needs to be tested is not efficient, or even possible, to test in isolation. It sure sucks to correct year-old bugs as a programmer, but you need to understand: testing is not programming, nor is it "extended unit tests". To a tester, modules interact and form something together; they aren't isolated blocks with APIs... Oh, and by the way, testers and programmers: never expect two sides to have interpreted an API spec the same way.

A way to solve this in e.g. Scrum is to add stories aimed at a kind of testing rather than at a functionality to test, e.g. "Security survey" or "User tests/usability", and if needed compensate with less testing of each specific function developed (which requires more testing work, as well as better communicated testing work, by programmers and others). Another solution is to add "system test" periods before release where the product as a whole is tested without too much regard to what has just been developed.

Let's make this a requirement

Some of this can certainly be made into requirements and/or added to a specification, and I'm not saying it doesn't help; I think having people who are aware of e.g. the quality characteristics write the requirements/specifications is great. But a specification will never include everything, simply because even if you have all your experts doing their best, the product changes from the spec, reality is never as problem free as the plan, and the context (users, market windows, staff etc.) will change in one way or another as the project progresses.

So, yes, I think considering quality characteristics when writing specifications/requirements is great but, no, it will certainly not cover everything.

Customers never ask for these things!

True, in many cases they don't explicitly ask for things like security and usability. But it's not because they don't care; it's because they simply assume it will be there. And even if you deliver and never get any feedback about bad quality, do all your customers come back? Do they recommend you to friends and business colleagues, creating new business opportunities?

And sure, if you care less about security you might save some money and still never end up in trouble. But if that gamble doesn't go as planned it might, as already said, put you out of business. So make sure you understand the risks really well before taking such a chance.

Summary

Customers will not stick with your products (except in rare cases, and you're not one of them) and your reputation will not be impressive if your products have low quality. And what great quality means can never be fully predicted or documented, so let's not kid ourselves and say that checking against the specification is enough.

Make time by explaining consequences (thus getting the right attention), educating the people around you and making sure everyone does their share of the testing job!

Good luck!

20 March 2014

Lessons from testing my own code

The last five months I've spent almost every free minute developing a website for my kids' preschool. This blog post is devoted to observations made while testing my own code as I created it. Notice that many of these are highly subjective and biased, but if nothing else, I hope they can raise some interesting questions.



Observation 1: Finish developing (mentally) before testing
I found it hard to do good testing when I was in the middle of developing something. My brain seemed to be focused on how to solve the next problem rather than on critically questioning what already existed.

On the other hand, when a function was done (both "function" and "done" are ambiguous; the key, though, is the feeling of being able to move on) I found it much easier to change my focus to "how can I improve this?" or "what are the problems with my solution?". I often set up a few tasks, completed those and then sat down and committed to making what already existed (including older stuff) awesome, before moving on to planning the next tasks.

Lesson: Finish up mentally, before doing non-superficial testing.

Observation 2: Some kinds of testing were easier to do than others
Testing I do rather statically (e.g. following checklists, like the OWASP checklist, or strict procedures) worked better than the more creative or "subjective" testing (no right answer, more about judgment/opinion), compared to when I work with other people's products. In the latter kind of testing I often found it much harder to see alternative ways of solving problems, and of course what I created was intuitive, right? .)

Two things I found useful when doing less static testing were scenarios and roles. When examining the product while trying to observe it from the perspective of a certain user, or with a certain task to perform, I could better defocus and question the decisions I had already made... a bit as if I was no longer responsible for the code: "I'm the news editor now, not the developer".

Lesson: Scenarios and roles, I think, helped me fight "my solution is perfect" bias.

Lesson: I sometimes find checklists thought limiting when testing others' products, but they did help me observe my own product more critically/objectively.

Observation 3: Only testing new functionality was not as efficient
I discovered my "examine old functionality" sessions seemed to reveal more bugs, both in number and severity, compared to sessions focused on new and specific functionality. A few reasons for this, I think, are:
  1. I learned new things about the product/environment that inspired new tests and applying these to existing functionality often revealed bugs I had previously missed.
  2. I got some more distance from the functions, making it easier to look at them as if I were a regular user, not the creator.
  3. I found my affection for the specific functionality (wanting my product to work) lowered over time. At some point it was like "I don't want this to work perfectly just to show myself that I've gotten better as a programmer".
  4. Naturally I found it much easier to observe inconsistencies and problems with flows when looking at the product as a whole, not at a single function.
  5. I found it much more motivating to "play around" with the whole product and following my own energy seemed to make me a better tester than if I tried to "force myself" to test a specific part.
Lesson: Instead of spending all my time testing new functionality, spending a short session on new functionality and then adding longer (or several shorter) general sessions every once in a while was a more effective way for me to spend my testing time.

Notice, I've observed a similar pattern at my job. Although, when being a tester in that context I also have to consider the fact that new functionality will be delivered to customers, so I can't "save testing for later" as much, and as a programmer I do prefer to work with code I've recently written. Still, scheduling time to just generally go over the whole product has often proved to be a valuable practice for me, even when there's a lot of new functionality to look into!

Observation 4: Testing my own code really helped me think about testability
I think any company would benefit from pairing a programmer with a tester, not so much for the tester to get quick feedback on bugs or help with how some functionality is meant to work, but for the programmer to see what parts are hard to observe or control (read: test). Turning it around, as a tester it has been interesting to play with what's actually possible to achieve. Also, with the code in front of me, I noticed I did a lot of things I would never have done on a "regular testing project", like disabling functionality to enable better/quicker testing of back end stuff and adding printouts I would normally "have to live without".
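
To make "disable functionality and add printouts" a bit more concrete, here is a small TypeScript/Node sketch of the kind of temporary testability switch I mean. It is my own illustration with made-up names, not actual code from the preschool site.

// Hedged sketch: a testability flag that skips a slow/irrelevant step and adds
// extra printouts while testing back-end logic. All names are hypothetical.

const TEST_MODE = process.env.TEST_MODE === "1";

function debugLog(message: string): void {
  if (TEST_MODE) {
    console.log(`[test] ${message}`);
  }
}

async function sendNotificationEmails(recipients: string[]): Promise<void> {
  if (TEST_MODE) {
    // Skip the real mail-out so the surrounding logic can be tested quickly.
    debugLog(`would have mailed ${recipients.length} recipients`);
    return;
  }
  // ... real e-mail sending would go here ...
}

async function publishNewsItem(title: string, recipients: string[]): Promise<void> {
  debugLog(`publishing "${title}"`);
  // ... store the news item ...
  await sendNotificationEmails(recipients);
  debugLog(`published "${title}"`);
}

// Run with TEST_MODE=1 to get the printouts and skip the slow parts.
publishNewsItem("Spring party", ["parent@example.com"]).catch(console.error);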

Testability, and especially ways to achieve/improve it, has been one of the most interesting lessons learned during this project, both from the perspective of me being a programmer and of me being a tester!

Lesson: Testers benefit from observing programmers/programming as it helps them understand what's possible, and programmers benefit from observing testers/testing as it helps them understand what makes a product testable (not to mention the strange ways users can "misuse" their creation .)

Observation 5: Programmers do a lot of testing better than testers
I found knowledge and control over the code invaluable, especially when testing "functional correctness":
  • I can manipulate code as I test
    e.g. remove client side form validation to test server side validation more quickly (see the sketch after this list)
  • I know about technical solutions affecting testing
    e.g. due to a technical limitation we need to handle two seemingly identical requests differently
  • I know about the full implemented feature set, including "bonus functionality"
    e.g. no one requested drag and drop but it came with the framework, and it still has to work
  • I know about the code quality/problems
    e.g. the user form handler is very messy, so give it some extra testing attention
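
A hedged sketch of the first bullet: with the client side form out of the way (or simply bypassed), server side validation can be exercised by posting directly to the endpoint. This is TypeScript/Node, and the URL and field names are made up for illustration.

// Hedged sketch: bypass the client-side form and post invalid data straight to
// the server to check that server-side validation rejects it.
// The endpoint and fields are hypothetical; adjust to your own application.

async function postRegistration(fields: Record<string, string>): Promise<number> {
  const response = await fetch("http://localhost:3000/register", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams(fields).toString(),
  });
  return response.status;
}

async function checkServerSideValidation(): Promise<void> {
  // Values the client-side form would normally refuse to submit.
  const invalidInputs = [
    { email: "not-an-email", name: "" },
    { email: "a@b.se", name: "x".repeat(10000) },
    { email: "<script>alert(1)</script>", name: "Robert" },
  ];

  for (const fields of invalidInputs) {
    const status = await postRegistration(fields);
    // Expectation: a 4xx response, never a 200 that silently stores bad data.
    console.log(`${JSON.stringify(fields).slice(0, 40)}... -> HTTP ${status}`);
  }
}

checkServerSideValidation().catch(console.error);
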
Communication can help with much of this, but it's hard as a programmer to know what a tester needs, it's hard as a tester to know what you need when you don't know what's available, and it's hard to be fully aware of what you actually take into consideration when testing, no matter whether you're a programmer or a tester.

As the programmer you're of course likely to agree with your solution and test for the things you've already built protection against, simply because those are the problems/scenarios you're familiar with/have identified. Thus a second, third and fourth opinion is important, but don't underestimate the awesome bulk of unique knowledge the programmer possesses!

Lesson: Programmers testing their own code is extremely important; much of the testing they do will not be done by a tester, or not as efficiently.

Lesson: Pairing the programmer with a tester/test coach is probably awesome (in fact, I think it is).

Observation 6: Mixing up customer value and technical practicality
"That checkbox is not vertically aligned but I know it's really messy to fix so it's not important".

This kind of mind trap I found hard to fight against, even though I'm normally a tester.

The only real protection against it that I found was to force myself to solve every problem I found. In the beginning it was time consuming and demotivating (it slowed progress quite a bit), but soon I noticed it greatly improved my "base quality", as many of the problems kept recurring until I dug in and learned how to solve them.

Lesson: An extremely low tolerance for faults really helps you improve! Of course I've had to let a few problems slide, but I've always given them significant time, weighed the options/workarounds and tried to find consensus within the professional community that a problem is impossible/not worth solving. The stuff that is technically inconvenient to solve often provided the big programmer "a-ha moments" when solved.

Summary
Forcing programmers to test their own code more will not make testers obsolete, just as adding more testers won't solve the problem of programmers not being given enough time to test their code. Both are important and both add unique elements to testing!

I think programmers can severely affect their own ability to test by how they behave/plan. For me, testing intensively while in the middle of development and trying to make "objective judgments" didn't work very well. Checklists, and waiting with non-superficial testing until "mentally done", seemed to greatly improve my ability to test my own code.

So can programmers test their own code?
Yes, and they should, but not alone for the best result. That's my experience!

Agree?