Posted by Ben Simo
Performance and load testing is often viewed as something that has to be done late in the development cycle, with the goal of validating that performance meets predefined requirements. The problem is that fixing performance problems can require major changes to a system's architecture. When we do performance testing last, it is often too late or too expensive to fix anything.
The truth is that performance testing does not need to happen last. Load test scripting is often easier if we wait until the end, but should we sacrifice quality just to make testing easier?
Scott Barber divides performance testing requirements and goals into the following three categories:
- Scalability -- extremely technical
- Stability -- mostly technical
- Speed -- fuzzy: some technical, others usability-related
Scott says that hard measurable requirements can usually be defined for scalability and stability; however, meeting technical speed requirements does not ensure happy users. I often hear (and read) it said that one must have test criteria defined before performance testing can start. I disagree. When requirements are difficult to quantify, it is often better to do some investigative testing to collect information instead of validating the system against predefined requirements.
"Speed is where things get fuzzy. Some speed requirements are quite definable, quantifiable and technical; others are not."
- Scott Barber
In addition to the three requirements categories, Scott argues that there are two different classifications of performance tests.
- Investigation -- collect information that may assist in measuring or improving the quality of a system
- Validation -- compare a system to predefined expectations
Traditional performance testing is treated as a validation effort with technical requirements. It is often said that a complete working system is required before testing can begin. Extensive up-front design is common. Tests are executed just before release, and problems are fixed after release. A couple of years ago, Neill McCarthy asked attendees at his STAR West presentation if these really are axioms. When we consider the potential of investigative testing, these assumptions of traditional performance testing quickly dissolve.
Investigate performance early.
Validate performance last.
Neill recommended that we apply the Agile Manifesto to early performance testing. How can we apply agile principles to investigative load testing?
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Model user behavior as early as possible, and model often. A working application is not needed to model user behavior. Revise the model as the application and expected use change. Script simple tests based on the model. Be prepared to throw away scripts if the application changes.
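To make this concrete, a workload model can start as nothing more than a list of scenarios with rough weights. The sketch below is a minimal Python illustration; the scenario names and percentages are made-up placeholders, not numbers from any real system:

```python
import random

# Hypothetical workload model: the scenario names and relative weights are
# illustrative assumptions, not figures from this article.
workload_model = {
    "browse_catalog": 60,
    "search": 25,
    "checkout": 15,
}

def pick_scenario(model):
    """Choose one scenario at random, weighted by the model."""
    names = list(model)
    weights = [model[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Simulate the scenario mix for 100 virtual user sessions.
sessions = [pick_scenario(workload_model) for _ in range(100)]
print({name: sessions.count(name) for name in workload_model})
```

A model this small is cheap to revise when the expected use changes, which is exactly the point.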
Conduct exploratory performance tests. Apply exploratory testing techniques to performance testing: simultaneous learning, test design, and test execution. Perform "what if" tests to see what happens if users behave in a certain way. Adapt your scripts based on what you learn from each execution.
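A throwaway "what if" probe does not need a commercial load tool. The sketch below is a minimal Python example that assumes a locally reachable test endpoint; the URL, user count, and request count are placeholders meant to be changed one at a time between runs:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# "What if" knobs to vary between runs; the URL and counts are placeholders
# for whatever your own model suggests.
TARGET_URL = "http://localhost:8080/search"
USERS = 10               # what if 10 users do this at once?
REQUESTS_PER_USER = 5

def one_user(user_id):
    """Issue a few requests and record how long each one takes."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = [t for user in pool.map(one_user, range(USERS)) for t in user]

print(f"requests: {len(results)}  "
      f"avg: {sum(results) / len(results):.3f}s  max: {max(results):.3f}s")
```

Change one parameter, run again, and compare what you see to what you expected; the learning, design, and execution all happen in the same loop.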
Evaluate each build on some key user scenarios. Create a baseline test that contains some key user scenarios and can be run with each build. A common baseline in the midst of exploratory and investigative tests supports comparison across builds.
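The comparison itself can be very simple. Here is a minimal sketch with made-up numbers; the scenario names, timings, and the 20% threshold are assumptions to be replaced with your own baseline data:

```python
# Minimal sketch of a build-over-build baseline comparison. Scenario names,
# timings, and the 20% threshold are illustrative assumptions; in practice
# the baseline numbers would be loaded from results saved with each build.
THRESHOLD = 1.20  # flag anything more than 20% slower than the baseline

baseline = {"login": 0.8, "search": 1.4, "checkout": 2.0}   # build N, seconds
current = {"login": 0.9, "search": 2.1, "checkout": 2.05}   # build N+1, seconds

for scenario, new in current.items():
    old = baseline.get(scenario)
    if old is not None and new > old * THRESHOLD:
        print(f"{scenario}: {old:.2f}s -> {new:.2f}s (possible regression)")
```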
Investigative agile performance testing can increase our confidence in the systems we test. Exploratory tests allow us to find important problems early. Testing throughout the development lifecycle makes it easier to measure the impact of code changes on performance.
References
- McCarthy, Neill; Performance testing in early development iterations, STAR West presentation, Nov 2005
- Barber, Scott; Investigation vs. Validation, Software Test & Performance, Nov 2005
- Barber, Scott; Two Kinds Of Performance Requirements, Software Test & Performance, Dec 2005
3 Comments:
June 12, 2007
cloosley wrote:
Ben,
This is a very good analysis; I am pleased to see you relating performance testing to other aspects of development, because -- if we generalize to the level of listing the customer requirements for usable software -- performance is simply a usability feature like any other.
And if development teams can just be persuaded to treat performance goals like any other functional or usability characteristic that must be evaluated during the development process, then agile or rapid application development methodologies actually enable more effective performance testing than any "waterfall" methodology that defers performance testing until late in the process.
So I believe the real problem lies in developers' perceptions of performance as "someone else's problem" and "something that can be tuned later," not in the development methodology itself.
Ten years ago I discussed this issue at some length in my book, particularly in Chapter 4. Since then (as far as I can tell), only the terminology has changed; people's mindsets remain the same.
I will try to write a post about this when I have a bit more time, and re-publish the core of my argument and the associated illustrations.
--Chris
June 12, 2007
Ben Simo wrote:
So I believe the real problem lies in developers' perceptions of performance as "someone else's problem" and "something that can be tuned later," not in the development methodology itself.
I agree. As with any other aspect of usability, good performance needs to be built in from the start. We can tune later, but tuning requires that we have a good foundation to make better.
In line with my recent plea for better error messages, I have found that adding simple performance (i.e., transaction time) logging to the code can provide very useful information with little effort. Collecting performance data throughout development gives us information as soon as possible.
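For example, a small timing wrapper is often enough. This is a minimal Python sketch; the logging setup and the place_order transaction are illustrative assumptions, not code from any real project:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def log_timing(func):
    """Log how long each call to the wrapped transaction takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logging.info("transaction=%s elapsed=%.3fs", func.__name__, elapsed)
    return wrapper

# Hypothetical transaction; the name and the sleep stand in for real work.
@log_timing
def place_order():
    time.sleep(0.2)

place_order()
```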
March 02, 2008
Anonymous wrote:
Ben, a good article, and it's nice to see that I have had some positive influence, even if I was slow to notice.
I have since been working on additional projects and have had some further success getting developers involved with non-functional testing.
I am almost at the point where I believe I have enough experience to blog, write, or speak at a conference on the subject, with more details and some interesting patterns.