Capture/Replay Testing

Slide 5 of 15


In the mid-1980s, automated testing tools emerged to take over the manual testing effort and improve the efficiency and quality of testing the target application. The expectation was that a computer could run more tests of a program than a human could manually, and run them more reliably. These early tools were fairly primitive and lacked advanced scripting-language facilities.

Static Capture/Replay Tools (without a scripting language)

With these early tools, tests were performed manually while the inputs and outputs were captured in the background. During subsequent automated playback, the script repeated the same sequence of actions, applying the recorded inputs and comparing the actual responses to the captured results. Any differences were reported as errors. References to GUI menus, radio buttons, list boxes, and text fields were stored directly in the script.
This approach left little flexibility for changes to the GUI. The resulting scripts contained hard-coded values that had to change whenever anything at all changed in the application, so the cost of maintaining them was astronomical and unacceptable. The scripts were also unreliable: even when the application had not changed, replay often failed, because pop-up windows, messages, and other events could occur that had not occurred when the test was recorded. If the tester made an error entering data, the test had to be rerecorded. If the application changed, the test had to be rerecorded. (from: Software Testing and Continuous Quality Improvement - William E. Lewis, Gunasekaran Veerapillai)
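The fragility described above can be sketched in a few lines. The following is a hypothetical illustration, not any real tool's API: a "recording" is a list of input events together with the outputs captured at record time, and replay re-applies the inputs and diffs the actual outputs against the hard-coded baseline.

```python
# Minimal sketch of static capture/replay (hypothetical names, not a real tool).
# Each recorded step is (widget, input_value, expected_output).

def replay(recording, app):
    """Re-apply each recorded input to `app`; report steps whose output differs."""
    failures = []
    for step, (widget, value, expected) in enumerate(recording):
        actual = app(widget, value)        # drive the application under test
        if actual != expected:             # compare against the captured baseline
            failures.append((step, widget, expected, actual))
    return failures

# A toy "application": the name field echoes its input in upper case.
def app(widget, value):
    return value.upper() if widget == "name_field" else value

# Recording captured against this build: widget names and outputs are hard-coded.
recording = [
    ("name_field", "alice", "ALICE"),
    ("ok_button", "click", "click"),
]

print(replay(recording, app))   # no differences while the application is unchanged
```

If the application later changes its behavior, say the name field starts echoing input verbatim, every step touching that field fails on replay, and the only remedy with a static tool is to rerecord the test.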
+ flexible testing
+ automatic regression testing
- expensive first execution
- ad-hoc coverage
- no coverage measurement
- fragile tests break easily