With companies like ours adopting rapid Agile development processes, we release and update our Learning Management System (LMS) continuously. With this pace comes the overhead of ensuring we have a sound and solid testing process; I cannot stress enough the importance of testing your software releases. The testing process helps ensure that your clients get a reliable product that is bug-free and operates to requirements and design. For me, the whole key to a software product's reliability is how robust your testing process is.
Within our organisation we break our testing down into the following categories:
- Unit Testing
- Integration Testing
- User Acceptance Testing (UAT) – both scripted and exploratory
- Regression Testing
Beyond the above we may also do some “Performance/Stress Testing” on major releases to ensure the software is operating at its optimum level.
So what is the difference between these testing categories? Below is a short synopsis of each one.
Unit Testing.
This testing is performed in a test client that is as close to production as possible. The test conditions and results are not formally documented, but should cover the full extent of the functionality that has been created or changed. This testing aims to demonstrate that the individual programs or objects that have been created or changed perform to specification.
To ensure that the testing is soundly based, the test clients must be refreshed with the configuration and data from the production system.
Tests must include positive and negative data boundary values, warning and error messages, and all inputs and outputs. Evidence of the testing performed should be retained until after release to production, so maintaining an audit trail of all unit testing is essential; this can be done by maintaining a “test script”. These individual unit tests are also recorded on a control spreadsheet listing all the functionality tested and the results, and this spreadsheet provides the audit trail of the testing undertaken.
A “unit” is defined as an LMS transaction, such as a report, a User Exit, a validation routine or any other bounded function being developed or changed.
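To make the boundary-value idea concrete, here is a minimal sketch in Python's built-in unittest module. The `validate_score` function and its 0–100 range are hypothetical stand-ins for an LMS validation routine; the point is that the tests exercise both edges of the valid range, one step outside each edge, and the error messages, as described above.

```python
import unittest


def validate_score(score):
    """Hypothetical LMS validation routine: scores must be 0-100 inclusive."""
    if not 0 <= score <= 100:
        raise ValueError(f"score {score} out of range 0-100")
    return score


class ValidateScoreUnitTests(unittest.TestCase):
    def test_positive_boundary_values(self):
        # Exercise both edges of the valid range.
        self.assertEqual(validate_score(0), 0)
        self.assertEqual(validate_score(100), 100)

    def test_negative_boundary_values(self):
        # One step outside each edge must raise, and the error
        # message is checked as well as the exception type.
        for bad in (-1, 101):
            with self.assertRaises(ValueError) as ctx:
                validate_score(bad)
            self.assertIn("out of range", str(ctx.exception))


if __name__ == "__main__":
    unittest.main()
```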
Integration Testing.
Once the functionality has been proved in isolation (i.e. unit tested), it will be further tested to demonstrate that it operates as specified in conjunction with other functions, existing or new. This is again performed in a test client configured to be as similar to the production client as possible and, where possible, containing a populated database. Integration testing should concentrate on the operability of the function:
- That it can be accessed via the menus where applicable
- That it is consistent
- That translations are present where applicable
- That it is robust (that is to say it can be performed repeatedly)
- That interfaces with other functions or external entities perform to specification
For reports or other functions that are data dependent, volume tests should be considered. All data boundaries such as page breaks, maximum numbers of records or field overflows should be exercised.
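As an illustration of exercising data boundaries, here is a short sketch of a volume test. The `paginate` helper and its page size are hypothetical stand-ins for a data-dependent LMS report; the tests hit the exact page-break boundary, one record over it, and a larger volume run.

```python
import unittest

PAGE_SIZE = 50  # hypothetical page size for an LMS report


def paginate(records, page_size=PAGE_SIZE):
    """Hypothetical report helper: split records into pages."""
    return [records[i:i + page_size] for i in range(0, len(records), page_size)]


class ReportPaginationTests(unittest.TestCase):
    def test_exact_page_boundary(self):
        # Exactly one full page must not produce a trailing empty page.
        pages = paginate(list(range(PAGE_SIZE)))
        self.assertEqual(len(pages), 1)

    def test_one_record_over_boundary(self):
        # One record past the boundary must spill onto a second page.
        pages = paginate(list(range(PAGE_SIZE + 1)))
        self.assertEqual(len(pages), 2)
        self.assertEqual(len(pages[1]), 1)

    def test_volume(self):
        # A volume run: no records lost or duplicated across pages.
        records = list(range(10_000))
        pages = paginate(records)
        self.assertEqual([r for p in pages for r in p], records)


if __name__ == "__main__":
    unittest.main()
```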
User Acceptance Testing (UAT).
UAT is the activity that validates the design of the system and verifies that it performs to the business specification agreed with, or expected by, the users. The purpose of the testing is to:
- Validate the delivered system against the user requirements
- Exercise the functionality in a business context
- Identify any procedural shortfalls
- Dry run any implementation requirements (such as conversions)
The data for the tests are stored in Excel spreadsheets and will be input manually and/or automatically, with the results checked against the script. If a discrepancy arises, the script is first checked to ensure it is correct and has been precisely executed; if it has, a defect is raised on the issues spreadsheet or document.
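A minimal sketch of the data-driven side of this, assuming the Excel test data has been exported to CSV with `case_id`, `input` and `expected` columns. The file name and the `apply_business_rule` function are hypothetical; the structure simply shows scripted cases being replayed and discrepancies collected for triage.

```python
import csv


def apply_business_rule(value):
    """Hypothetical system under test: caps enrolments at 30 per course."""
    return min(int(value), 30)


def run_uat_scripts(path="uat_test_data.csv"):
    """Replay each scripted case and collect any discrepancies.

    Assumes the Excel test data has been exported to CSV with
    'case_id', 'input' and 'expected' columns.
    """
    defects = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actual = apply_business_rule(row["input"])
            if str(actual) != row["expected"]:
                # Candidate defect: re-check the script first, then raise
                # it on the issues spreadsheet if it stands.
                defects.append((row["case_id"], row["expected"], actual))
    return defects


if __name__ == "__main__":
    for case_id, expected, actual in run_uat_scripts():
        print(f"Case {case_id}: expected {expected}, got {actual}")
```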
Regression Testing.
Regression Testing is performed to determine whether changes have introduced errors into (or regressed) other parts of the system that are already in production. It is essential for releases of substantial functionality, because changes and fault fixes often introduce adverse side effects.
Regression tests are based on the manual test cases applied during UAT for the original release. A full functional regression test (i.e. a rerun of the whole test plan) is impractical unless all tests are automated; however, the cost of building and maintaining a complete automated regression suite is likely to be prohibitive, so a sub-set of tests is identified that provides sufficient coverage. A proportion of regression test scripts will not be suitable for automation, so some manual regression testing will always be required. There are a number of test automation tools out in the marketplace; we use MTM (Microsoft Test Manager), as this allows us to carry out a regression test on all releases before we push to production, and also on the production system itself.
When the tests are structured into functional areas, it is possible to identify the areas of likely impact, allowing limited or component regression testing to be performed for less significant functional releases.
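We use MTM for this, but the sub-set idea itself is tool-independent. Here is an illustrative sketch in plain Python, where a hypothetical `@regression` tag marks the test cases that form the automated regression pack, and an untagged case is left out of the run.

```python
import unittest

# Hypothetical registry: classes added here form the automated
# regression sub-set that runs before every push to production.
REGRESSION_SUITE = []


def regression(test_class):
    """Class decorator tagging a test case as part of the regression pack."""
    REGRESSION_SUITE.append(test_class)
    return test_class


@regression
class EnrolmentRegressionTests(unittest.TestCase):
    def test_enrolment_cap_unchanged(self):
        # Guard a behaviour that is already live in production.
        self.assertEqual(min(45, 30), 30)


class ExploratorySpike(unittest.TestCase):
    """Not tagged, so excluded from the regression run."""
    def test_new_feature(self):
        self.assertTrue(True)


def build_regression_suite():
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for cls in REGRESSION_SUITE:
        suite.addTests(loader.loadTestsFromTestCase(cls))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_regression_suite())
```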
Performance/Stress Testing.
This testing is performed where it has been identified that application or system performance may be lower than specified, where network limitations exist, or where changes may reduce the existing performance headroom.
Performance testing is stage and environment independent and should be undertaken at the appropriate juncture of the development or testing cycle. Specialist knowledge will usually be required for measurement and tuning.
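For flavour, here is a minimal load-test sketch. The `render_report` function is a hypothetical stand-in for an LMS operation (real runs would hit the system itself); the pattern of firing concurrent calls and reporting latency percentiles is what gets compared against the specified headroom.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def render_report():
    """Hypothetical stand-in for an LMS operation under load."""
    time.sleep(0.01)  # simulate work


def timed_call(_):
    start = time.perf_counter()
    render_report()
    return time.perf_counter() - start


if __name__ == "__main__":
    # Fire 200 calls across 20 workers and report latency percentiles,
    # which can then be compared against the specified performance targets.
    with ThreadPoolExecutor(max_workers=20) as pool:
        latencies = sorted(pool.map(timed_call, range(200)))
    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```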
Conclusion.
So, as you can see, with each release we run an extensive testing regime to ensure product quality is maintained. We also ensure these testing processes are implemented as part of our ISO 9001 system.