
September 30, 2008

The Antonym of Testing


"... one usually encounters a definition such as, 'Testing is the process of confirming that a program is correct. It is the demonstration that errors are not present.' The main trouble with this definition is that it is totally wrong; in fact, it almost defines the antonym of testing."

- Glenford Myers,

Software Reliability: Principles & Practices, 1976

People keep telling me that testing is a validation activity -- that the purpose of testing is to validate that the software meets all the specifications, has no errors, meets performance SLAs, meets expectations of anonymous users, or some other lofty goal.

I read about testing processes designed to validate software. I use testing tools built to support validation. I listen to service companies pitch testing services to validate software. I read about testing metrics built on the assertion that software systems can be proved correct. I attend testing presentations explaining the presenters' best practices for validation.

The trouble is that we cannot prove software correct. We cannot prove the absence of bugs. We cannot test every possible state and input. We cannot evaluate every possible output. We cannot fully understand the desires of stakeholders. We cannot prove that customers will be happy. We cannot prove that a software product will solve the problems it was built to solve. If all this were possible, I suspect insurance companies would find a way to make a profit selling software quality insurance.
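The impossibility of exhaustive testing is easy to quantify. This back-of-the-envelope sketch (my own hypothetical numbers, not from the post) counts the inputs of a function that takes just two 32-bit integers:

```python
# Back-of-the-envelope arithmetic (hypothetical example): how long would it
# take to exhaustively test a function of two 32-bit integer arguments?
inputs_per_arg = 2 ** 32            # distinct values of one 32-bit int
total_cases = inputs_per_arg ** 2   # every (a, b) pair: 2**64 cases

tests_per_second = 10 ** 9          # an optimistically assumed test rate
seconds = total_cases / tests_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{total_cases:,} cases -> roughly {years:,.0f} years")
```

Even at a billion tests per second, covering every input pair of this trivial function would take centuries, and that is before considering state, timing, and environment.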

"If you think you can fully test a program without testing its response to every possible input, fine. Give us a list of your test cases. We can write a program that will pass all your tests but still fail spectacularly on an input you missed. If we can do this deliberately, our contention is that we or other programmers can do it accidentally."

- Cem Kaner, Jack Falk, and Hung Quoc Nguyen,
Testing Computer Software, Second Edition, 1999
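Kaner, Falk, and Nguyen's point is easy to demonstrate. This toy sketch (my own hypothetical example, not from the book) plants a failure on one input that a plausible-looking test suite happens to miss:

```python
# A toy illustration (hypothetical, not from the book): a "tested"
# absolute-value function that passes every listed case yet still
# fails on an input the suite missed.
def buggy_abs(x):
    if x == -7:                     # the deliberately planted failure
        return -7
    return x if x >= 0 else -x

test_cases = [0, 1, -1, 5, -5, 100, -100]   # a plausible-looking suite
assert all(buggy_abs(x) == abs(x) for x in test_cases)  # all pass

print(buggy_abs(-7))                # the missed input: returns -7, not 7
```

If a failure can be planted deliberately on one untested input, an equivalent failure can slip in accidentally just as easily.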

Now, thirty-two years after Glenford Myers called testing to prove correctness the opposite of testing, we're surrounded by testing practices and tools based on proving correctness. The myth of proving correctness is alive and well.

Activities designed to try to prove correctness are the antonym of testing.

So if testing is not validation, what is testing? Testing is investigation, and the communication of useful information about quality to decision makers.

"Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system."

- James Bach,
James Bach on Risk-Based Testing, STQE Magazine, Nov 1999


"Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information."

- Cem Kaner, James Bach, Bret Pettichord,
Lessons Learned In Software Testing: A Context-Driven Approach, 2002


"A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions."

- Bret Pettichord,
Don't Become the Quality Police, StickyMinds.com, 2002


Once we admit that we cannot prove the software correct, we can refocus our efforts on finding useful quality-related information. Instead of pretending to assure quality or validate correctness, we can gather and communicate useful information. Investigate the software. Find information about threats to the quality of the systems under investigation. Communicate that information in terms that matter to stakeholders. Help managers make informed decisions.

August 8, 2007

Things We Know

I find it at work. I find it in online forums. I find it in books. I find it in papers. I find it in blogs. I find it at conferences.

I hear it from experts. I hear it from freshers. I hear it from friends. I hear it from managers. I sometimes even hear it come out of my own mouth.

It influences testers. It influences developers. It influences managers that influence testers and developers. It impacts customers.

It wastes time. It wastes money. It frustrates developers. It confuses executives. It demeans testers. It decreases quality in the name of improvement.

It permeates the practice of developing and testing software.



What is this ubiquitous it?




It is testing folklore.




It ain’t so much the things we don’t know that gets us in trouble. It’s the things we know that ain’t so.
- Artemus Ward



Here are some examples I pulled off the top of my head:
  • There are best practices
  • Tool vendors know those best practices
  • The right tools make good testing
  • Testers are the enemies of developers
  • Automated unit testing is the only testing we need
  • Written requirements are needed for testing
  • It is possible to document unambiguous requirements
  • Repeatability is maturity
  • Tests can be completely designed and scripted before execution
  • Testing is simple if guided by the right process
  • Quality can be tested into a product
  • Good manual testing can be replaced by automation
  • Automation is only good for regression testing
  • Test case counts are a good measure of test status
  • All web pages should load in under 6 seconds
  • Testers need to have development skills
  • Good testing can be pre-scripted to be executed by anyone that can follow directions
  • Boundaries are easy to identify
  • Most bugs occur at boundaries
  • Testing is easily outsourced to unintelligent people
  • Testing is easily outsourced to tools
  • Increased testing effort improves quality
What folklore do you encounter?

It is time to unlearn those things we know that ain't so. Challenge the folklore. Ask questions.

  • Who says so?
  • How do they know?
  • What are they missing?
  • Does it apply to my context?
  • Does it make sense?

Maybe it's time to call Mythbusters.

May 5, 2007

Not Gonna Bow

Individuals and interactions
over processes and tools

Working software
over comprehensive documentation

Customer collaboration
over contract negotiation

Responding to change
over following a plan

- Agile Manifesto


Nearly 400 years ago, Francis Bacon challenged the status quo in scientific thought in “The New Organon”. James Bach recently pointed out some interesting quotes from this work that apply to software testers. I agree.

Bacon argued that placing our preconceived beliefs over what we observe causes great harm. He went so far as to describe these harmful preconceived notions as “idols”. Bacon put these idols into four categories:

  • Idols of the Tribe: Errors common to mankind.

  • Idols of the Cave: Errors specific to each individual’s education and experience.

  • Idols of the Market Place: Errors formed through association with others — often due to misunderstanding others.

  • Idols of the Theater: Errors formed from dogma (institutionalized doctrine) and flawed demonstrations.


All of these exist in software testing. As testers, we should be questioning these “idols”, not worshiping them. Sometimes questioning them may prove them right.

Bacon did not ask anyone to abandon their beliefs without cause. Instead, he asked that we not make them idols capable of leading us to ignore what would be obvious if we weren’t looking through the distorted mirror of our idols.

A modern-day simplification of Bacon’s arguments may be the Agile Manifesto. We should not let our idols of process, documentation, contracts, and plans prevent us from accomplishing the desired goal. Process, documentation, contracts, and plans are only good insofar as they help. They should not prevent us from seeking improvement.

In some ways I believe that the promotion of testing folklore is the result of an industry-wide desire to show that we are mature — as mature as the engineering of physical products. I believe that eagerness to demonstrate maturity helps lead to the implementation of bad processes and certifications. Ironically, enforced process (see the bottom of the FSOP cycle) works best for the immature and gives the impression that anyone who can follow the process can test software.

Don't get me wrong. Process and documentation are good things that help even the smartest people when appropriately applied.

"The only thing that interferes with my learning is my
education." - Albert Einstein


We need to seek continual improvement. It is sad that process and certification often become idols that overshadow the real goals.

February 12, 2007

Best Practices Aren’t

The first two of the Seven Basic Principles of the Context-Driven School of software testing are:

1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.


As a former quality school gatekeeper, I understand the value of standards – in both products and processes. However, I am concerned by the current “best practices” trends in software development and testing. The rigidity that we demand in product standards can hurt in process standards. Even the CMM (which is often viewed as a rigid process) has “Optimizing” as the highest level of maturity. A mature process includes continuous learning and adjustment of the process. No process should lock us into one way of doing anything.

Nearly 100 years ago, the industrial efficiency pioneer Frederick Taylor wrote “among the various methods and implements used in each element of each trade there is always one method and one implement which is quicker and better than any of the rest”.

I do not disagree that there may be a best practice for a specific task in a specific context. Taylor broke existing methods and implements (tools) down into small pieces and scientifically evaluated them in search of improvements. The problem is that today's best practices are often applied as one-size-fits-all processes. The best practice for one situation is not necessarily best in all other contexts, and a "best practice" today may no longer be the best practice tomorrow. This is actually the opposite of what Taylor did: he sought out and applied best practices to small elements, while many of today's "best practices" are applied at a universal level. Consultants and tool vendors have discovered that there is money to be made taking "best practices" out of one context and applying them to all others. It is harder, and likely less profitable, for the "experts" to seek out the best practices for a specific context.

I am amazed by what appears to be widespread acceptance of “best practices” by software testers. As testers, it is our job to question. We make a living questioning software. We need to continually do the same for practices. Test your practices.

When presented with a best practice, consider contexts in which the practice is not the best. The broader the scope of a best practice, the more situations it is likely not to fit. Don’t limit your toolbox to a single practice or set of practices. Be flexible enough to adjust your processes as the context demands. Treat a process as an example: apply it where it fits, and be willing to deviate from it -- or apply an entirely different process -- when it does not fit the context.

No process should replace human intelligence. Let process guide you when it applies. Don’t let a process make decisions for you.

Seek out continuous improvement. Don't let process become a rut.

Process is best used as a map, not as an autopilot.