Posted by Ben Simo
"A Requirement is a quality or condition that matters to someone who matters."Automated tests are usually coded to perform validations against written requirements. Computers are deterministic in that they need specific instructions regarding what to test, what counts as a passed test, and what counts as a failed test. This is one of the weaknesses of test automation. Many requirements exist beyond the hard written requirements. Test automation can be a great tool for measuring hard requirements. For example, automation can be great for validating mathematical calculations.
- Cem Kaner, James Bach, and Bret Pettichord
Lessons Learned In Software Testing
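As a minimal sketch of such a hard validation (the function and values here are invented for illustration, not from any real system):

```python
def line_total(quantity, unit_price):
    """Invented calculation under test."""
    return quantity * unit_price

# A hard validation: a specific input with one exact expected output.
# The computer can decide pass or fail with no human judgment needed.
def test_line_total():
    assert line_total(3, 2.50) == 7.50
```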
Test automation can also be a great tool for testing fuzzy requirements through the use of heuristics. In addition to coding validations (oracles) for hard facts, automation can be coded to report things that require human attention, helping direct where human testers look.
"Only weak bugs have logic to them ... Subtle bugs have no definable pattern -- they are wild cards."
- Boris Beizer
Software Testing Techniques
A home-grown test automation tool I used many years ago (built by someone else and later enhanced by me) reported "pass", "fail", or "inconclusive" for each test it performed. Its builders had recognized that in many cases human judgment or investigation was required to determine whether something really passed or failed. In some cases, it was simply not economical to code a validation for something that humans process better than machines. Therefore, instead of trying to make automation do it all, create automation that does what computers do best and let thinking human testers pick up where the computers stop.
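A minimal sketch of that kind of three-valued reporting (the verdict names match the tool's, but the check and thresholds are invented for illustration):

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"  # needs human judgment or investigation

def check_response_time(seconds):
    # Clear cases are decided by the machine; the gray zone goes to a human.
    if seconds <= 1.0:
        return Verdict.PASS
    if seconds > 5.0:
        return Verdict.FAIL
    return Verdict.INCONCLUSIVE
```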
I once automated data validation for a large pricing database. It was suspected that there were numerous errors in this database, and finding and fixing possible errors in millions of records was a daunting task. Instead of creating complex calculations to try to completely automate the validation, I coded simple heuristic rules. When one of these rules failed, the "failure" was reported to human testers and data editors for investigation. The rules were things like the following (a code sketch appears after the list):
- Suggested retail price is greater than wholesale price
- Current price is within 10% of the previous price
- Generic equivalent price is less than the name-brand price
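A minimal sketch of how such heuristic rules might look in code (the field names and the 10% threshold are illustrative assumptions, not the original tool's):

```python
def suspicious_findings(record):
    """Return heuristic rule violations worth a human look.
    An empty list means nothing suspicious, not proof of correctness."""
    findings = []
    if record["suggested_retail"] <= record["wholesale"]:
        findings.append("suggested retail not greater than wholesale")
    previous = record.get("previous_price")
    if previous and abs(record["current_price"] - previous) > 0.10 * previous:
        findings.append("current price differs from previous by more than 10%")
    generic = record.get("generic_price")
    if generic is not None and generic >= record["brand_price"]:
        findings.append("generic equivalent not cheaper than name brand")
    return findings
```

Each finding is a flag for investigation, not a verdict; as the next paragraph notes, a record can violate a rule and still be correct.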
Were these test oracles always true? No. In some cases it was correct for the data to fail the above tests. However, the automation helped direct the attention of testers. These heuristic validations also helped expose unexpected patterns and led to finding bugs in the software that processed and formatted the data.
Instead of creating complex test automation tools, Harry Robinson suggests that we build "The Simplest Thing That Could Possibly Find A Bug". Sometimes this means that we code heuristic validations instead of complex validations that report results with absolute certainty. Let the computers report things that a human tester should investigate. Instead of "inconclusive", Harry uses the term "suspicious". I like that.
The next time you automate testing, in addition to thinking of things that computers can report as "passed" or "failed", think of things that they might be able to report as "suspicious".