Posted by Ben Simo
Methods I have used for automating “black box” software testing…
I have approached test automation in a number of different ways over the past 15 years. Some have worked well and others have not. Most have worked when applied in the appropriate context. Many would be inappropriate for contexts other than that in which they were successful.
Below is a list of methods I’ve tried in the general order that I first implemented them.
Notice that I did not start out with the record-playback test automation that is demonstrated by tool vendors. The first test automation tool I used professionally was the DOS version of Word Perfect. (Yes, a word processor as a test tool. Right now, Excel is probably the one tool I find most useful.) Word Perfect had a great macro language that could be used for all kinds of automated data manipulation. I then moved on to Pascal and C compilers. I even used a pre-HTML hyperlink system called First Class to create front ends for integrated computer-assisted testing systems.
I had been automating tests for many years before I saw my first commercial GUI test automation tool. My first reaction to such tools was something like: "Cool. A scripting language that can easily interact with the user interfaces of other programs."
I have approached test automation as software development since the beginning. I've seen (and helped recover from) a number of failed test automation efforts that were implemented using the guidelines (dare I say "Best Practices"?) of the tools' vendors. I had successfully implemented model-based testing solutions before I knew of keyword-driven testing (as a package by that name). I am currently using model-based test automation for most GUI test automation, including release acceptance and regression testing. I also use computer-assisted testing tools to help generate test data and model applications for MBT.
I've rambled on long enough. Here's my list of methods I've applied in automating "black box" software testing. What methods have worked for you?
Computer-assisted Testing
· How It Works: Manual testers use software tools to assist them with their testing. Specific tasks in the manual testing process are automated to improve consistency or speed. (A sketch follows this list.)
· Pros: Tedious or difficult tasks can be given to the computer. A little coding effort greatly benefits testers. A thinking human being is involved throughout most of the testing process.
· Cons: A human being is involved throughout most of the testing process.
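Here's a minimal Python sketch of the kind of helper I mean: it generates every combination of a few input fields so a tester doesn't have to type them out by hand. The field names and values are invented for illustration.

```python
import csv
import itertools

# All field names and values here are invented for illustration.
FIELDS = {
    "account_type": ["checking", "savings"],
    "amount": ["0.00", "0.01", "999999.99"],
    "currency": ["USD", "EUR"],
}

def write_combinations(path):
    """Write every combination of the field values to a CSV
    that a manual tester can step through."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS.keys())
        for row in itertools.product(*FIELDS.values()):
            writer.writerow(row)

write_combinations("deposit_test_data.csv")
```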
Static Scripted Testing
· How It Works: The test system steps through an application in a pre-defined order, validating a small number of pre-defined requirements. Every time a static test is repeated, it performs the same actions in the same order. This is the type of test created using the record and playback features in most test automation tools. (A sketch of the pattern follows this list.)
· Pros: Tests are easy to create for specific features and to retest known problems. Non-programmers can usually record and replay manual testing steps.
· Cons: Specific test cases need to be developed, automated, and maintained. Regular maintenance is required because most automated test tools cannot adjust for minor application changes that might not even be noticed by a human tester. Test scripts can quickly become complex and may even require a complete redesign each time an application changes. Tests only retrace steps that have already been performed manually. Tests may miss problems that are only evident when actions are taken (or not taken) in a specific order. Recovery from failure can be difficult: a single failure can easily prevent testing of other parts of the application under test.
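A minimal sketch of the static pattern. GuiDriver here is a stand-in for whatever record-playback tool or GUI library is actually in use; the control names and canned response are invented so the sketch runs on its own.

```python
# GuiDriver is a stand-in for a real GUI automation tool.
class GuiDriver:
    def type_text(self, control, text):
        print(f"type {text!r} into {control}")
    def click(self, control):
        print(f"click {control}")
    def read_text(self, control):
        return "Welcome, admin"  # canned response so the sketch runs

def test_login(gui):
    # The same actions, in the same order, every run.
    gui.type_text("username", "admin")
    gui.type_text("password", "secret")
    gui.click("login_button")
    assert "Welcome" in gui.read_text("status_bar")

test_login(GuiDriver())
```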
Wild (or Unmanaged) Monkey Testing
· How It Works: The automated test system simulates a monkey banging on the keyboard by randomly generating input (key presses, mouse moves, clicks, drags, and drops) without knowledge of the available input options. Activity is logged, and major malfunctions such as program crashes, system crashes, and server/page not found errors are detected and reported. (A sketch follows this list.)
· Pros: Tests are easy to create, require little maintenance, and, given time, can stumble into major defects that may be missed when following pre-defined test procedures.
· Cons: The monkey is not able to detect whether or not the software is functioning properly. It can only detect major malfunctions. Reviewing logs to determine just what the monkey did to stumble into a defect can be time consuming.
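A sketch of the idea in Python. Here send_key and app_crashed are stand-ins for real input injection and crash detection; everything else is plain Python.

```python
import random

# send_key and app_crashed are stand-ins for real input injection
# and crash/hang detection.
KEYS = list("abcdefghijklmnopqrstuvwxyz0123456789") + ["TAB", "ENTER", "ESC"]

def send_key(key):
    print(f"key: {key}")

def app_crashed():
    return False  # a real check might poll the process or window

def wild_monkey(steps, seed):
    rng = random.Random(seed)  # log the seed so a failing run can be replayed
    for step in range(steps):
        send_key(rng.choice(KEYS))  # no knowledge of what's on screen
        if app_crashed():
            print(f"crash after step {step}; seed={seed}")
            return

wild_monkey(steps=20, seed=42)
```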
Trained (or Managed) Monkey Testing
· How It Works: The automated test system detects the options displayed to the user and randomly enters data and presses buttons that apply to the detected state of the application. (A sketch follows this list.)
· Pros: Tests are relatively easy to create, require little maintenance, and easily find catastrophic software problems. May find errors more quickly than an unsupervised monkey test.
· Cons: Although a trained monkey is somewhat selective in performing actions, it also knows nothing (or very little) about the expected behavior of the application and can only detect defects that result in major application failures.
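A sketch of the difference from the wild monkey: the trained monkey first asks what is on screen, then picks an applicable action. visible_controls is a stand-in for querying the UI, and the control names are invented.

```python
import random

def visible_controls():
    # Stand-in for querying the UI: a real implementation would walk
    # the widget tree or the accessibility layer.
    return [("button", "OK"), ("button", "Cancel"), ("textbox", "name")]

def trained_monkey(steps, seed):
    rng = random.Random(seed)
    for _ in range(steps):
        kind, name = rng.choice(visible_controls())
        if kind == "button":
            print(f"click {name}")
        elif kind == "textbox":
            print(f"type {rng.randint(0, 9999)} into {name}")
        # Still no model of *expected* behavior: like the wild monkey,
        # it can only notice crashes and other major failures.

trained_monkey(steps=10, seed=7)
```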
Tandem Monkey Testing
· How It Works: The automated test system performs trained monkey tests, in tandem, in two versions of an application: one performing an action after the other. The test tool compares the results of each action and reports differences. (A sketch follows this list.)
· Pros: Specific test cases are not required. Tests are relatively easy to create, require little maintenance, and easily identify differences between two versions of an application.
· Cons: Manual review of differences can be time consuming. Because two versions of the application must run at the same time, this type of testing is usually only suited to testing through web browsers and terminal emulators. Both versions of the application under test must use the same data – unless the data is the subject of the test.
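A sketch of the tandem idea. AppVersion stands in for a driver attached to one running copy of the application; the simulated regression exists only so the sketch prints a difference.

```python
import random

class AppVersion:
    """Stand-in for a driver attached to one version of the application."""
    def __init__(self, name, bug=False):
        self.name, self.bug = name, bug
    def perform(self, action):
        if self.bug and action == "save":
            return "error"  # simulated regression, for the sketch only
        return f"ok:{action}"

def tandem_monkey(old, new, steps, seed):
    rng = random.Random(seed)
    for _ in range(steps):
        action = rng.choice(["save", "open", "close"])
        a, b = old.perform(action), new.perform(action)
        if a != b:
            print(f"difference on {action!r}: {old.name}={a} {new.name}={b}")

tandem_monkey(AppVersion("v1.0"), AppVersion("v1.1", bug=True), steps=10, seed=3)
```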
Data-Reading Scripted Testing
· How It Works: The test system steps through an application using pre-defined procedures with a variety of pre-defined input data. Each time an action is executed, the same procedures are followed; however, the input data changes. (A sketch follows this list.)
· Pros: Tests are easy to create for specific features and to retest known problems. Recorded manual tests can be parameterized to create data-reading static tests. Performing the same test with a variety of input data can identify data-related defects that may be missed by tests that always use the same data.
· Cons: All the development and maintenance problems associated with pure static scripted tests still exist with most data-reading tests.
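A sketch: the steps stay fixed and only the data varies. The CSV is inline so the sketch runs on its own, and the columns are invented.

```python
import csv
import io

# Inline CSV so the sketch is self-contained; the columns are invented.
CASES = io.StringIO("""username,password,expect
admin,secret,Welcome
admin,wrong,Invalid password
,,Username required
""")

def test_login(username, password, expect):
    # ...the same recorded steps as a static test, parameterized...
    print(f"run login with {username!r}/{password!r}, expect {expect!r}")

for row in csv.DictReader(CASES):
    test_login(row["username"], row["password"], row["expect"])
```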
Model-Based Testing
· How It Works: Model-based testing is an approach in which the behavior of an application is described in terms of actions that change the state of the system. The test system can then dynamically create test cases by traversing the model and comparing the result of each action to the action's expected result state. (A sketch follows this list.)
· Pros: Models are relatively easy to create and maintain. Models can be as simple or complex as desired. Models can be easily expanded to test additional functionality. There is no need to create specific test cases because the test system can generate endless tests from what is described in the model. Maintaining a model is usually easier than managing test cases (especially when an application changes often). Machine-generated “exploratory” testing is likely to find software defects that will be missed by traditional automation that simply repeats steps that have already been performed manually. Human testers can focus on bigger issues that require an intelligent thinker during execution. Model-based automation can also provide information to human testers to help direct manual testing.
· Cons: It requires a change in thinking. This is not how most of us are used to creating tests. Model-based test automation tools are not readily available.
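A sketch of the core idea: a tiny (state, action) → next-state model and a random walk over it. The states and actions are invented, and drive_app stands in for the real application driver, which would click, type, and then observe the application's actual state.

```python
import random

# A tiny model of a login screen: (state, action) -> expected next state.
MODEL = {
    ("logged_out", "good_login"): "logged_in",
    ("logged_out", "bad_login"):  "logged_out",
    ("logged_in",  "log_out"):    "logged_out",
}

def drive_app(state, action):
    # Stand-in for driving the real application; to keep the sketch
    # runnable it simply obeys the model, so the assert below never
    # fires. A real driver would report the observed state.
    return MODEL[(state, action)]

def random_walk(steps, seed):
    rng = random.Random(seed)
    state = "logged_out"
    for _ in range(steps):
        action = rng.choice([a for (s, a) in MODEL if s == state])
        actual = drive_app(state, action)
        expected = MODEL[(state, action)]
        assert actual == expected, f"{action} in {state}: got {actual}"
        state = expected
        print(f"{action} -> {state}")

random_walk(steps=8, seed=1)
```

Note that no test cases are written anywhere: every run is generated from the model, and expanding the model expands the testing.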
Keyword-Driven Testing
· How It Works: Test design and implementation are separated. Use case components are assigned keywords. Keywords are linked to create test procedures. Small automation components are implemented for each keyword. (A sketch follows this list.)
· Pros: Automation maintenance is simplified. Coding skills are not required to create tests from existing components. Small reusable components are easier to manage than long recorded scripts.
· Cons: Test cases still need to be defined. Managing the process can become as time consuming as automating with static scripts. Tools to manage the process are expensive. Cascading bugs can stop automation in its tracks. The same steps are repeated each time a test is executed. (Repeatability is not all it's cracked up to be.)
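A sketch of the pattern: keywords map to small automated components, and a test is just a table of keyword rows. All names here are invented.

```python
# Each keyword maps to a small automated component.
def open_login():          print("open the login screen")
def enter_user(name):      print(f"enter user {name!r}")
def enter_password(pw):    print("enter password")
def press(button):         print(f"press {button!r}")
def verify_text(text):     print(f"verify {text!r} is displayed")

KEYWORDS = {
    "OpenLogin": open_login,
    "EnterUser": enter_user,
    "EnterPassword": enter_password,
    "Press": press,
    "VerifyText": verify_text,
}

# A non-programmer can write this table without touching the code above.
TEST = [
    ("OpenLogin",),
    ("EnterUser", "admin"),
    ("EnterPassword", "secret"),
    ("Press", "Login"),
    ("VerifyText", "Welcome"),
]

for keyword, *args in TEST:
    KEYWORDS[keyword](*args)
```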