28 March 2014

What happens if...

Intro

The quality categorization below derives from the work of James Bach / the Test Eye.

What happens if...

... we don't care about security

The internet is a playground for brilliant people who like to figure out how to get past applications' moats or bring them down. Leaked credit card numbers, leaked personal or otherwise sensitive information, leaked passwords, malware on customers' computers, defaced company websites, services made unavailable: all of this happens daily due to security shortcomings in software. It can lead to lawsuits, loss of customers, the company losing a lot of money or, in the worst case, bankruptcy.

Even if your application doesn't use network communication, no one will care the day someone hacks/steals a computer with (or, even worse, due to) your app and gets their hands on sensitive information. Also, few (read: no) customers will care whether it was a third-party product that failed. The "we didn't build it so we don't have to test it" mentality is simply not productive.

... we don't care about charisma

The number of applications out there is huge, and that has changed how we pick our preferred ones. Now you can install an application, look at it, dislike its design (not professional, fresh, modern, elegant, beautiful, simple, safe... looking enough) and quickly move on to the next, without even "using it". I repeat: you can lose customers simply because your product doesn't look like the product it is!

"It's just an [pick your type of application], no one will care about how it looks" is rarely true anymore. Also remember charisma is not just "cool design features", it can mean anything from simple/elegant to looking robust/secure (we do judge an application by its cover).

... we don't care about usability

Just because someone initially chooses your application doesn't mean they'll stick with it. One annoying extra step or interruption in their workflow might be enough to trigger an "uninstall and move on". And even if customers are tolerant and the competition isn't that fierce, bad usability can render key features useless (and the money spent developing them wasted).

Also, usability problems can add a lot of cost for customer support, who will have to spend time explaining things to customers that the product should make obvious.

One big enemy of usability, in my experience, is programmer shortcuts. "Well, the framework doesn't really support dropdowns, so we had to go with five links instead." Beware of just accepting this; usually technology isn't the limiting factor, even though it might feel like it when you're in the middle of things (my experience as both a tester and a programmer).

... we don't care about robustness

A crash or loss of service, even for a brief moment, can be enough to get your application uninstalled. Downtime can also be extremely costly for customers and, in the worst case, a customer may not just stop using the product but also demand compensation. No matter what, frequent/long downtimes and other robustness problems can quickly hurt the company's/product's reputation. Remember, time is money.

One common mistake: many testers just open the application, do their tests and close it down/reset. Customers, on the other hand, often work for hours or even weeks without closing anything down. Can your application handle that? Are you sure?
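To make this concrete, here's a minimal soak-test sketch in Python. It's an illustration only: exercise_app is a hypothetical stand-in for driving your application's typical actions (here, in-process), and "peak memory doubled" is just one crude heuristic for creeping resource use.

    import time
    import resource  # stdlib, Unix-only

    def exercise_app():
        """Hypothetical stand-in for one round of typical user actions."""
        return len([str(i) for i in range(10_000)])

    def soak_test(hours=8, interval_s=60):
        """Repeat the same workload for hours, watching resource use creep."""
        deadline = time.time() + hours * 3600
        baseline = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        while time.time() < deadline:
            exercise_app()
            peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            if peak > 2 * baseline:  # units are platform dependent (KB on Linux)
                print(f"Possible leak: peak RSS grew from {baseline} to {peak}")
            time.sleep(interval_s)

    soak_test(hours=0.001, interval_s=0.1)  # short demo run

If the application runs as a separate process, you'd watch that process instead of the test's own, but the principle is the same: the same actions, repeated for hours, with an eye on trends rather than single results.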

... we don't care about performance

We are an impatient, stressed society; if a webpage/application/service doesn't load quickly enough, we simply won't wait to see whether the result is pleasing. Slow apps are quickly gone.

Once again, some experience: if a function is really slow and can't be tweaked to go faster (e.g. a complex algorithm), maybe it shouldn't be in the product at all, even though it seems valuable. Or the algorithm should be simplified, or turned into some kind of background job. All this to avoid frustrated users (if a function is in the product, it's expected to work, and work well!).
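As a sketch of the "background job" option (a minimal Python illustration; slow_analysis is a hypothetical stand-in for the expensive algorithm): the interactive path only enqueues work and stays responsive, while a worker thread grinds through the queue.

    import queue
    import threading

    def slow_analysis(item):
        """Hypothetical stand-in for the expensive algorithm."""
        return item + sum(i * i for i in range(100_000))

    jobs = queue.Queue()
    results = []

    def worker():
        while True:
            item = jobs.get()
            if item is None:  # sentinel: shut the worker down
                break
            results.append(slow_analysis(item))
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    # The interactive path only enqueues work, so it returns immediately.
    for item in range(3):
        jobs.put(item)

    jobs.join()  # a real UI would poll or use a callback instead of blocking
    jobs.put(None)
    print(results)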

... we don't care about compatibility

I happen to like my browser. If you don't support that browser (and support means "works better than the competitors", not "well, it does, but...") I won't even consider you an option... And by the way, the world is going mobile, and Windows is no longer "the only" operating system on the market.

So what happens if you "support Windows, iOS and Linux" but only Windows is actually tested, and in reality your product hasn't even started on Linux for the last three versions? Well, a customer could demand their money back, which adds administrative costs, or compensation for the money lost installing/upgrading. A company buying your product might ditch you on all platforms out of distrust, or you might be headlining a lot more blogs than you wished for (or rather, not for the reasons you wished for). Simply put, it would be considered lying, and lying is rarely appreciated. And shortening the list of compatible systems out of uncertainty (due to lack of testing) is of course a deal breaker for potential customers on those systems.

... we don't care about IT-ability

Every install/reinstall and update/upgrade (or lack thereof) is a potential source of frustration and an incentive to uninstall. Make sure you avoid any unnecessary friction, or even a great application may find itself deleted.

Also, in bigger companies there's likely someone maintaining/supporting your product internally. Those people tend to have quite a bit of say when it comes to selecting applications, so you don't want your application/brand/company to be on their blacklist. "It seems like a great product, but our IT department says it's impossible to have in our software infrastructure."

Make time

We all have limited resources and have to prioritize. What I'm trying to explain above is not "we need to double the number of testers" but rather "we need to prioritize testing and make sure the right people focus on the right stuff".

Of course, a big part of this is arguing for the importance of the quality characteristics, not just the functions themselves. I hope the list above can aid you in this.

Programmers

Many things are better tested by programmers (they know what has been implemented and can manipulate code for targeted, temporary testability). And no matter what, a simple bug caught by a programmer and solved immediately is a lot less time consuming than one found by a tester: the tester has to report it, may be blocked until it's fixed and the application rebuilt, the programmer has to stop what they're doing, and context switching likely hits both parties. Don't underestimate the cost saved by programmers who test their own code well and (almost) always hand over high-quality stuff!

Product owners

My product owner does an awesome testing job! What he does is simply make sure that what he gets is what he asked for, and since he wrote the requirement/specification/user story that led to the code, he knows what he actually meant rather than what can be interpreted from the text (meaning is often lost/changed when thoughts are translated into written language). His work saves me (the tester) a lot of time figuring out whether he actually means what I interpret him to mean.

Domain experts

In my current company, much of the technical complexity of our main product is in the handling of security cameras. In this area I'm glad we have a great domain expert who both tests and technically supports my testing when needed. This once again frees up my time, so I can focus on e.g. security in this area, which I know more about.

Combined effort

I (let's hope so, at least) have broader knowledge about testing and about potential ways to put the system at risk of failing. However, programmers, product owners, domain experts, graphical/interaction designers etc. know their own domains much better. Teaming up when testing is rarely a bad idea, as it's a way for both parties to learn from each other and to better utilize each person's strengths (e.g. the domain expert shows various ways to feed data into the system while the tester suggests various kinds of data to try).

Not everything can be tested in isolation

Testing "what has just been developed" is not enough (which some Scrum fans seems to find provoking). Often when testing we simply need to zoom out, for several reasons:
  • Consistency, flow (as a usability aspect) and data flow (as in how data is transferred through the product), for example, require you to look at the whole product, or a big part of it.
  • We learn new things all the time, which also concerns old functionality, and one difference between programming and testing is that testers don't need to do expensive refactoring to apply new knowledge to what already exists.
  • We test how new functionality integrates with old, and sometimes those new changes also affect how old functionality integrates with other old functionality.
  • Security, performance, robustness etc. are not always efficient to test per specific function; instead you need regular sessions testing them across all the functionality in a sprint, from the last month or, once again, in the whole product.
  • You don't always know which existing parts new functionality impacts, and programmers are not always aware of what might, even on a code level, affect existing functionality (and if they are, it's not necessarily communicated). Thus we simply need to "look over the whole thing", or look at a more general area, every once in a while.
  • ... and when looking at old functionality we inevitably find issues we've missed before due to new tests, new knowledge, changed context etc.
What sums this up, I think, is: a lot of what needs testing is not efficient/possible to test in isolation. It sure sucks to correct year-old bugs as a programmer, but you need to understand: testing is not programming, nor is it "extended unit tests". To a tester, modules interact and form something together; they aren't isolated blocks with APIs... Oh, and by the way, testers and programmers: never expect two sides to have interpreted an API spec the same way.
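A deliberately contrived illustration of that last point (all names and the date field are hypothetical): the spec says an order date is sent as "a string containing the date", and nothing more. Both sides below would pass their own unit tests, yet the integration is silently wrong.

    from datetime import datetime

    # The spec only says: orderDate is "a string containing the date".
    def team_a_serialize(d):
        return d.strftime("%d/%m/%Y")  # Team A read the spec as day-first

    def team_b_parse(s):
        return datetime.strptime(s, "%m/%d/%Y")  # Team B read it as month-first

    sent = team_a_serialize(datetime(2014, 3, 5))  # 5 March -> "05/03/2014"
    received = team_b_parse(sent)  # parsed as 3 May: silently wrong
    print(sent, "->", received.date())  # days above 12 raise an error instead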

One way to make room for this kind of testing in e.g. Scrum is to add stories aimed at a kind of testing rather than at a functionality to test, e.g. "Security survey" or "User tests/usability", and, if needed, compensate with less testing of each specific function developed (which requires more testing work, as well as better communicated testing work, by programmers and others). Another solution is to add "system test" periods before release, where the product as a whole is tested without too much regard to what has just been developed.

Let's make this a requirement

Some of this can certainly be turned into requirements and/or added to a specification, and I'm not saying that doesn't help; I think having people who are aware of e.g. the quality characteristics write the requirements/specifications is great. But a specification will never include everything, simply because even with all your experts doing their best, the product drifts from the spec, reality is never as problem-free as the plan, and the context (users, market windows, staff etc.) will change in one way or another as the project progresses.

So, yes, I think considering quality characteristics when writing specifications/requirements is great, but, no, it will certainly not cover everything.

Customers never ask for these things!

True, in many cases they don't explicitly ask for things like security and usability. But that's not because they don't care; it's because they simply assume those will be there. And even if you deliver and never get any feedback about bad quality, do all your customers come back? Do they recommend you to friends and business colleagues, creating new business opportunities?

And sure, if you care less about security you might save some money and still never end up in trouble. But if that gamble doesn't go as planned, it might, as already said, put you out of business. So make sure you understand the risks really well before taking such a chance.

Summary

Customers will not stick with your products (except in rare cases, and you're not one of them) and your reputation will not be impressive if your products are of low quality. And what great quality means can never be fully predicted or documented, so let's not kid ourselves and say that checking against the specification is enough.

Make time by explaining consequences (thus getting the right attention), educating the people around you and making sure everyone does their share of the testing job!

Good luck!

1 comment:

  1. This was great reading. Having people around you who care is extremely valuable. Unfortunately I think that part is difficult to teach; it's more of a mindset. There is one way, though, to help others get this "aha" feeling, and you hit it in your summary: "Make time by explaining...". By taking time to reflect and asking the right type of questions, you help others think for themselves, and eventually a mindset where people care is grown.

    Keep up the writing!
