Observation 1: Finish developing (mentally) before testing
I found it hard to do good testing while I was in the middle of developing something. My brain seemed more focused on how to solve the next problem than on critically questioning what already existed.
On the other hand, when a function was done (both "function" and "done" are ambiguous; the key is the feeling of being able to move on) I found it much easier to change my focus to "how can I improve this?" or "what are the problems with my solution?". I often set up a few tasks, completed them, and then sat down and committed to making what already existed (including older stuff) awesome, before moving on to planning the next tasks.
Lesson: Finish up mentally before doing non-superficial testing.
Observation 2: Some testing was easier to do than other kinds
Testing I could do rather statically (e.g. following checklists, like an OWASP checklist, or strict procedures) held up better, relative to when I work with other people's products, than more creative or "subjective" testing (no right answer, more about judgment/opinion). In the latter case I often found it much harder to see alternative ways to solve problems, and of course what I created was intuitive, right? :)
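To make that concrete, here is a minimal sketch of what I mean by checklist-style testing, written in Python with the requests library. The URL, form fields and payload list are hypothetical, and nowhere near a full OWASP checklist; the point is the mechanical, checklist-driven shape of the pass.

```python
# A minimal sketch of checklist-driven testing: mechanically walking a
# fixed list of suspicious inputs against a form. The URL, field names
# and payloads are hypothetical, not a complete OWASP checklist.
import requests

BASE_URL = "http://localhost:8080"  # hypothetical app under test

SUSPECT_INPUTS = [
    "' OR '1'='1",                  # classic SQL injection probe
    "<script>alert(1)</script>",    # reflected XSS probe
    "A" * 10_000,                   # oversized input
]

def run_login_checklist() -> None:
    for payload in SUSPECT_INPUTS:
        resp = requests.post(
            f"{BASE_URL}/login",
            data={"username": payload, "password": payload},
        )
        # Checklist expectation: a controlled rejection (no crash)...
        assert resp.status_code in (400, 401, 422), (payload, resp.status_code)
        # ...and the raw payload never echoed back unescaped.
        assert payload not in resp.text, payload

if __name__ == "__main__":
    run_login_checklist()
    print("login checklist passed")
```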
Two things I found useful when doing less static testing were scenarios and roles. When examining the product from the perspective of a certain user, or with a certain task to perform, I could better defocus and question the decisions I had already made... a bit like I was no longer responsible for the code: "I'm the news editor now, not the developer".
Lesson: Scenarios and roles, I think, helped me fight "my solution is perfect" bias.
Lesson: I sometimes find checklists thought-limiting when testing others' products, but they did help me observe my own more critically/objectively.
Observation 3: Only testing new functionality was not as efficient
I discovered that my "examine old functionality" sessions seemed to reveal more bugs, both in number and severity, than sessions focused on new and specific functionality. A few reasons for this, I think, are:
- I learned new things about the product/environment that inspired new tests, and applying these to existing functionality often revealed bugs I had previously missed.
- I got some more distance from the functions, making it easier to look at them as if I were a regular user, not the creator.
- I found my affection for the specific functionality (wanting my product to work) lowered over time. At some point it was like "I don't want this to work perfectly just to show myself that I've gotten better as a programmer".
- Naturally I found it much easier to observe inconsistencies and problems with flows when looking at the product as a whole, not at a single function.
- I found it much more motivating to "play around" with the whole product, and following my own energy seemed to make me a better tester than "forcing myself" to test a specific part.
Lesson: Instead of spending all my time testing new functionality, spending a short session on new functionality and then adding longer (or several shorter) general sessions every once in a while was a more effective way for me to spend my testing time.
Note: I've observed a similar pattern at my job. When I'm the tester in that context, though, I also have to consider that new functionality will be delivered to customers, so I can't "save testing for later" as much, and as a programmer I do prefer to work with code I've recently written. Still, scheduling time to just generally go over the whole product has often proved to be a valuable practice for me, even when there's a lot of new functionality to look into!
Observation 4: Testing my own code really helped me think about testability
I think any company would benefit from pairing a programmer with a tester, not so much for the tester to get quick feedback on bugs or help with how some functionality is meant to work, but for the programmer to see which parts are hard to observe or control (read: test). Turning it around, as a tester it has been interesting to play with what's actually possible to achieve. Also, when I had the code in front of me, I noticed I did a lot of things I would never have done on a "regular testing project", like disabling functionality to enable better/quicker testing of back-end stuff, or adding printouts I would normally "have to live without".
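As a rough illustration, here is a sketch in Python of the kind of testability hooks I mean. The environment flag, function names and validation logic are all invented for the example; the real value is the pattern of a switch that lets raw input reach the back end directly, plus a printout of exactly what the back end sees.

```python
# Sketch of two testability hooks: a switch that bypasses the normal
# client-side cleanup so raw input reaches the back end, and a printout
# of exactly what the back end receives. All names are hypothetical.
import json
import logging
import os

log = logging.getLogger("newsapp")
logging.basicConfig(level=logging.DEBUG)

# Set NEWSAPP_SKIP_CLIENT_CHECKS=1 while testing to disable the
# client-side cleanup step and exercise the server checks directly.
SKIP_CLIENT_CHECKS = os.environ.get("NEWSAPP_SKIP_CLIENT_CHECKS") == "1"

def client_side_cleanup(form: dict) -> dict:
    """Stand-in for the cleanup the client normally performs."""
    return {key: value.strip() for key, value in form.items()}

def validate_on_server(form: dict) -> dict:
    """Stand-in for the real server-side validation."""
    errors = {key: "required" for key, value in form.items() if not value}
    return {"ok": not errors, "errors": errors}

def handle_user_form(raw_form: dict) -> dict:
    if not SKIP_CLIENT_CHECKS:
        raw_form = client_side_cleanup(raw_form)  # normal path
    # The printout I would normally "have to live without": the exact
    # data the server-side validation receives.
    log.debug("form as received by back end: %s", json.dumps(raw_form))
    return validate_on_server(raw_form)

if __name__ == "__main__":
    print(handle_user_form({"name": "  Erik  ", "email": ""}))
```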
Testability, and especially ways to achieve/improve it, has been one of the most interesting lessons learned during this project, both from the perspective of me being a programmer and of me being a tester!
Lesson: Testers benefit from observing programmers/the program, as it helps them understand what's possible, and programmers benefit from observing testers/testing, as it helps them understand what makes a product testable (not to mention what strange ways users can find to "misuse" their creation :)
Observation 5: Programmers do a lot of testing better than testers
I found knowledge and control over the code invaluable, especially when testing "functional correctness":
- I can manipulate code as I test: e.g. remove client-side form validation to more quickly test server-side validation (see the sketch after this list).
- I know about technical solutions affecting testing: e.g. due to a technical limitation we need to handle two seemingly identical requests differently.
- I know about the full implemented feature set, including "bonus functionality": e.g. no one requested drag and drop but it came with the framework; still, it has to work.
- I know about the code quality/problems: e.g. the user form handler is very messy, so give it some extra testing attention.
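As a sketch of that first point (again Python with the requests library; the endpoint and field names are invented): rather than clicking through a form whose JavaScript stops bad input, I can post invalid data straight to the endpoint so only the server-side checks run, which amounts to the same thing as removing the client-side validation.

```python
# Sketch of "remove client-side validation to test server-side
# validation": post invalid data straight to the endpoint so the
# browser's checks never run. Endpoint and fields are hypothetical.
import requests

BASE_URL = "http://localhost:8080"

INVALID_FORMS = [
    {"email": "not-an-email", "age": "42"},  # malformed email
    {"email": "a@b.se", "age": "-1"},        # out-of-range age
    {"email": "", "age": ""},                # empty required fields
]

def test_server_side_validation() -> None:
    for form in INVALID_FORMS:
        # Client-side validation never runs here, so any acceptance is
        # a genuine hole in the server-side checks.
        resp = requests.post(f"{BASE_URL}/users", data=form)
        assert resp.status_code in (400, 422), (form, resp.status_code)

if __name__ == "__main__":
    test_server_side_validation()
    print("server rejected every invalid form")
```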
Communication can help with much of this, but it's hard as a programmer to know what a tester needs; it's hard as a tester to know what you need when you don't know what's available; and no matter whether you're a programmer or a tester, it's hard to be fully aware of what you actually take into consideration when testing.
As the programmer you're of course likely to agree with your own solution and to test for the things you've already built protection against, simply because those are the problems/scenarios you're familiar with and have identified. Thus a second, third and fourth opinion is important, but don't underestimate the awesome bulk of unique knowledge the programmer possesses!
Lesson: Programmers testing their own code is extremely important; much of the testing they do would either not be done by a tester at all, or not be done as efficiently.
Lesson: Pairing the programmer with a tester/test coach is probably awesome (in fact, I think it is).
Observation 6: Mixing up customer value and technical practicality
"That checkbox is not vertically aligned but I know it's really messy to fix so it's not important".
Mind traps like this I found hard to fight against, even though I'm normally a tester.
The only real protection against it that I found was to force myself to solve every problem I found. In the beginning this was time consuming and demotivating (it slowed progress quite a bit), but soon I noticed it greatly improved my "base quality", as many of the problems kept recurring until I dug in and learned how to solve them.
Lesson: Extremely low tolerance for faults really helps you improve! Of course I've had to let a few problems slide, but I've always given them significant time, weighed the options and workarounds, and tried to confirm with the professional community that a problem really is impossible or not worth solving. The stuff that was technically inconvenient to solve often provided the big programmer "a-ha moments" once solved.
Summary
Forcing programmers to test their own code more will not make testers obsolete, just as adding more testers won't solve the problem of programmers not being given enough time to test their code. Both are important and both add unique elements to testing!
I think programmers can severely affect their ability to test by how they behave and plan. For me, testing intensively in the middle of development and trying to make "objective judgments" didn't work very well. Checklists, and waiting with non-superficial testing until I felt "mentally done", seemed to greatly improve my ability to test my own code.
So can programmers test their own code?
Yes, and they should, but not alone if you want the best result. That's my experience!
Agree?