The QA process works towards the following goals:
- Provide a quantitative metric of product quality (number of open and fixed/verified bugs)
- Provide a quantitative metric of functional-requirement fulfillment (implemented and verified test cases or user stories against a target total)
- Ensure stable product quality over time (regression testing)
The QA process is based on product requirements and works with artifacts (labeled builds) produced by the release-management process. Below are the key points the QA process follows to meet its goals:
- QA activities are applied to each official [labeled] build (the set of applied tests might differ, though)
- Product quality is evaluated against a set of test cases. Test cases are usually divided into groups (by functional area; by priority) – this allows more flexibility in allocating test effort.
- Each produced [labeled] build should have an associated test report which specifies the tests that were executed and the result of each (pass/fail).
- Each build that’s publicly deployed (i.e. uat/staging/production) should be tested at least against a short-list of critical test cases (usually referred to as a ‘smoke test’). If a build does not pass this ‘smoke test’, it can’t (and won’t) be deployed anywhere.
- When adding new functionality or updating existing features, the list of test cases is updated to reflect these changes (new features usually result in new test cases being added to the list).
- Having a consistent track of test results provides immediate insight into:
- Development progress (new test cases mean new functionality being added)
- Stability of each build we produced (number of bugs and their priority) – which greatly simplifies the task of choosing a ‘good enough’ build for urgent delivery
- The amount of work remaining on our plate (blocked test cases + open bugs)
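The indicators listed above fall out directly from a tracked set of test results. Below is a minimal sketch in Python; the status values and case names are illustrative assumptions, not part of any mandated reporting scheme:

```python
# Sketch: deriving the progress/stability indicators from tracked test results.
# The statuses ("passed", "failed", "blocked") are illustrative only.
from collections import Counter

results = [
    {"case": "login",    "status": "passed"},
    {"case": "checkout", "status": "failed"},   # a failed case implies an open bug
    {"case": "refunds",  "status": "blocked"},  # blocked by an unresolved bug
]

counts = Counter(r["status"] for r in results)
open_bugs = counts["failed"]
remaining_work = counts["blocked"] + open_bugs  # blocked test cases + open bugs

print(counts["passed"], open_bugs, remaining_work)  # prints: 1 1 2
```

Comparing these numbers across consecutive labeled builds is what turns raw test results into the trend view described above.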
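The ‘smoke test’ rule mentioned earlier (no deployment unless the critical short-list passes) can likewise be expressed as a simple pre-deployment gate. This is a hedged sketch: `can_deploy`, the test-case names, and the report shape are all hypothetical:

```python
# Minimal sketch of a pre-deployment smoke-test gate:
# a build is deployable only if every critical ("smoke") case passed.

def can_deploy(test_report: dict, smoke_suite: list) -> bool:
    """test_report maps test-case id -> pass/fail; smoke_suite lists critical cases."""
    # A smoke case missing from the report counts as not executed, hence a failure.
    return all(test_report.get(case, False) for case in smoke_suite)

# Hypothetical test report for a labeled build:
report = {"login": True, "checkout": True, "search": False}
smoke = ["login", "checkout"]  # short-list of critical cases

print(can_deploy(report, smoke))               # True: all smoke cases passed
print(can_deploy(report, smoke + ["search"]))  # False: a smoke case failed
```

In practice such a gate would sit in the deployment pipeline and consume the per-build test report mentioned above.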
When maintaining automated [Selenium] tests within an iterative development process, the practice is usually as follows:
- Assuming we have sufficient test coverage for the previous iteration
- During current iteration, QA team does the following:
- Preparing a list of test cases to cover new functionality (during the current sprint these test cases are usually executed manually)
- Implementing [automated] tests for test cases that were implemented/delivered during the previous sprint
- [optional] Backing critical bug fixes with automated tests in order to ensure proper regression testing
The reason automated tests run one iteration behind development is that the functionality has to be in place before tests can be written against it, which is not the case during the ‘current’ sprint. With this approach to testing (i.e. using automated tests), a sign of a healthy project is an increase in the number of [passing] automated tests after each iteration.
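As an illustration, one such automated test might look like the minimal Selenium sketch below (Python bindings, Selenium 4 API) covering a hypothetical ‘login’ smoke case. The URL, element locators, and expected title are assumptions, and running it requires a live browser and driver:

```python
# Hedged sketch of an automated smoke test for a hypothetical login flow.
# Assumes Selenium 4 with a locally available Chrome driver; the URL,
# field names, and expected title are placeholders for a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("qa-user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # The pass/fail result of this assertion feeds the build's test report.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

Tests like this one, written during the sprint after the feature landed, are what make the count of passing automated tests grow iteration over iteration.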