The test automation engineers at TestResults.io have seen a variety of styles of test cases: from spreadsheet tables with individual steps, to PowerPoint presentations, to text documents mixed with screenshots. We have seen it all. In the end, content matters; the form is secondary. As part of the automation process at TestResults.io, we always bring the test cases into a structured form, which is then also used to generate the test reports for the automated executions.
With this in mind, we suggest that you write your test cases in a form that fits a structured approach from the start, especially if you are just starting to write them to be automated with TestResults.io. We have prepared a test case document template that helps you document your tests.
- Identifier: Every test case requires a unique identifier. If you already use a Test Case Management Tool, ideally use the same identifier for TestResults.io as well. If your software has clearly defined modules without inter-dependencies, consider using a prefix to identify the module (e.g. KB001).
- Title: Give the test case a short but descriptive title, so that it is clear what the topic of the test case is.
- Description: We recommend summarizing the goal of the test in the description field. What are the main functionalities that are being tested? If you can trace the test case to a specification or requirement, list its identifier here as well for future reference.
- Preconditions: Indicate the state of the system under test (and potential interface systems) that is required to start the test. The precondition is enforced when executing the test case (e.g. a login action is performed), but it should be simple and carry a low risk of failure. If the feature you need to test requires a complicated setup, it’s better to add this setup as normal steps. Ideally, the precondition is one that is shared by many test cases.
- Test Steps: The test instructions are separated into test steps. Each step consists of a test input and an expected result. The test step is the smallest unit for which a result will be reported (Pass/Fail, duration).
You can add scenario titles to give a clearer structure to longer test cases that are separated into different scenarios (e.g. positive and negative tests). These scenario titles will also be included in the generated test report.
- Test Input: Describe the actions that need to be performed on the system under test. You don’t need to describe every click, but the required actions must be unambiguous, especially when there is more than one way to do something.
If a lot of data needs to be entered into the system under test, it might make sense to refer to a supporting file containing that data.
If a generated value from the system under test needs to be reported or will be needed later in the test case, indicate this in the test input. Similarly, if some output from the system under test should be saved as an artifact of the test execution, or if additional screenshots are required during execution, this can be added to the test input.
- Expected Result: How should the system react to the test input? This is what will determine the result (passed or failed) of the step. For every “verify” instruction there should be an expected result.
For setup steps that prepare the system for the feature that is being tested, the expected result can be left empty. However, it often makes sense to also verify such preparation steps with an expected result to make troubleshooting easier (e.g. maybe a failure later in the test case was caused by an incorrect behavior in a preparation step).
Indicate a clear tolerance if applicable, e.g. when a timestamp or a numeric value needs to be verified.
- Cleanup: If data was modified in a remote system that is not part of the test environment, or a local test environment was modified, the data should be cleaned up and the system brought back to a known state so that the next test case can be executed with that system. The cleanup step is executed even in case of a failure where the test case was not fully executed. Temporary data that is created and later removed during the test case should also be considered for cleanup in case of a failure.
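The fields above can be sketched as a simple data structure. The following is a hypothetical illustration in Python, not the actual TestResults.io format; the article example and field names are invented for demonstration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestStep:
    test_input: str                 # action(s) to perform on the system under test
    expected_result: str = ""       # may stay empty for pure setup steps
    scenario: Optional[str] = None  # optional scenario title for grouping

@dataclass
class TestCase:
    identifier: str                 # unique ID, e.g. with a module prefix like "KB001"
    title: str                      # short but descriptive
    description: str = ""           # goal of the test, traced requirement IDs
    preconditions: str = ""         # simple, low-risk starting state
    steps: list = field(default_factory=list)
    cleanup: str = ""               # restore a known state, even after failure

# Hypothetical example instance
tc = TestCase(
    identifier="KB001",
    title="Create knowledge base article",
    description="Verifies that an article can be created and published.",
    preconditions="User is logged in.",
    steps=[
        TestStep("Open the article editor and enter a title.",
                 "The editor shows the entered title."),
        TestStep("Click 'Publish'.",
                 "The article is listed as published within 5 seconds.",
                 scenario="Positive tests"),
    ],
    cleanup="Delete the created article.",
)
```

Note that the second step states its tolerance ("within 5 seconds") directly in the expected result, and the scenario title would appear in the generated report.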
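When an expected result carries a tolerance, the automated check should encode that tolerance explicitly rather than compare for exact equality. A small sketch using Python's standard library; the values are purely illustrative:

```python
import math
from datetime import datetime, timedelta

# Numeric value: expected 12.50 with a tolerance of +/- 0.05
measured = 12.47
assert math.isclose(measured, 12.50, abs_tol=0.05)

# Timestamp: reported time must lie within 2 seconds of the expected time
expected = datetime(2024, 1, 15, 10, 30, 0)
reported = datetime(2024, 1, 15, 10, 30, 1)
assert abs(reported - expected) <= timedelta(seconds=2)
```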
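Because the cleanup runs even when the test case fails partway through, an automation harness typically wraps step execution in a try/finally. A minimal sketch, assuming each step is a callable; the function names are illustrative and not the TestResults.io API:

```python
def run_test_case(steps, cleanup):
    """Run steps in order; always run cleanup, even if a step fails."""
    results = []
    try:
        for name, action in steps:
            try:
                action()
                results.append((name, "Pass"))
            except AssertionError as exc:
                results.append((name, f"Fail: {exc}"))
                break  # stop executing further steps after a failure
    finally:
        cleanup()  # bring the system back to a known state

    return results

# Hypothetical example: the second step fails, but cleanup still runs.
state = {"cleaned": False}

def step_ok():
    assert True

def step_fail():
    assert False, "expected result not met"

results = run_test_case(
    [("setup", step_ok), ("verify", step_fail)],
    cleanup=lambda: state.update(cleaned=True),
)
# state["cleaned"] is True even though a step failed
```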