In a previous blog, SQA2 posed the question: when it comes to software development, what should QA test? Our answer was that you need to test essentially everything, from the components, performance, features, and functionality of the product itself to the deployment, the development process, and the miscellaneous items covered in ad hoc testing.
In this blog we’d like to address the obvious follow-up question: How do you ensure that all of the things that should get tested actually do get tested?
Here’s what we recommend…
Start by having clear requirements on your tickets
The development process starts with the tickets. Ensure that each ticket clearly explains what is being created, what it needs to do and how to tell if the software successfully accomplishes these goals. If, for example, the ticket is for a performance improvement story, include the desired metrics, such as “performance should increase by 10%.” If there are special scenarios that need to be accounted for, these need to be clearly spelled out on the ticket as well.
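A requirement like this can be turned directly into an executable check. Below is a minimal sketch in Python; the `checkout` function, the baseline of 0.5 seconds, and the workload size are all hypothetical stand-ins for whatever the ticket actually specifies.

```python
import time

def checkout(n_items):
    """Hypothetical stand-in for the feature under test."""
    time.sleep(0.001 * n_items)

def test_meets_performance_requirement():
    """Encodes an assumed ticket requirement: 'performance should
    increase by 10%' relative to a recorded baseline of 0.5 s."""
    baseline_seconds = 0.5                      # previous release, from the ticket
    required_seconds = baseline_seconds * 0.9   # 10% improvement target

    start = time.perf_counter()
    checkout(n_items=100)
    elapsed = time.perf_counter() - start

    assert elapsed <= required_seconds, (
        f"took {elapsed:.3f}s, required <= {required_seconds:.3f}s")
```

Because the ticket states the metric explicitly, the test can fail or pass without any judgment call from the tester.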
Create written test cases
A robust QA testing system should be based on written test cases. Every requirement, function, acceptance criterion, and so on should be covered by a written test case. These test cases form your base, and creating them is probably the most important step in ensuring that things do indeed get tested.
As you are creating the test cases, go line by line through the requirements listed on the associated ticket to ensure everything listed is covered. If the ticket omits something obvious (for example, a performance story that does not specify a target metric), test the performance anyway. Also include tests for all the relevant things mentioned in our "What should QA test" article.
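The line-by-line mapping described above can itself be checked mechanically. The sketch below uses hypothetical ticket requirements and test case IDs; the point is simply that each written case records which ticket line it covers, so uncovered lines are easy to find.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement: str   # the ticket line this case covers
    steps: list
    expected: str

# Hypothetical ticket requirements, copied line by line from the ticket:
ticket_requirements = [
    "User can log in with a valid email and password",
    "Login fails with an unknown email",
    "Login responds within 2 seconds",   # metric assumed for illustration
]

test_cases = [
    TestCase("TC-1", ticket_requirements[0],
             ["Open login page", "Enter valid credentials", "Submit"],
             "User lands on the dashboard"),
    TestCase("TC-2", ticket_requirements[1],
             ["Open login page", "Enter unknown email", "Submit"],
             "An 'account not found' error is shown"),
    TestCase("TC-3", ticket_requirements[2],
             ["Submit valid credentials", "Time the response"],
             "Response arrives in under 2 seconds"),
]

def uncovered(requirements, cases):
    """Return any ticket lines that no written test case covers."""
    covered = {c.requirement for c in cases}
    return [r for r in requirements if r not in covered]
```

Running `uncovered(ticket_requirements, test_cases)` should return an empty list; anything it returns is a requirement with no written test case behind it.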
If you’re worried about missing things and want to take the “gut check” out of QA test case identification, you might also consider using Behavior Based Testing (BBT) methodologies (see the SQA2 blog on this topic for more information).
Have test case reviews
Ideally your test cases will be reviewed by both the developers and the business, to ensure that everything they seek is being tested, and that QA clearly understood the ticket. Was anything “lost in translation”? Was anything overlooked?
Use a tracking system for the testing process
What often happens is that QA teams execute tests and then store the results in an ad hoc manner, such as in an Excel file, Google doc or email. But when there is no standardized, centralized location in which to store test cases and results, there’s no way to figure out what has been tested and what the results were. In this situation the people in charge of the overall project have no idea what’s going on, and no simple way to find out.
This is why we strongly recommend using a tracking system for the testing process. An Application Lifecycle Management (ALM) solution such as Jira, Rally or TestRail is ideal for this. The ALM gives you a centralized place to store test cases, test results and documents associated with the tests or testing process. Using a tracking system creates visibility and accountability, which in turn brings tremendous peace of mind to those leading the project.
Document everything, and keep the documentation up-to-date
Whether you are using an ALM or some other tracking system, everything should be documented, and all documentation should be stored in a central location that’s available to the entire team. As more tickets related to a given feature get developed, these should all be tied together in the documentation system. As things change or get updated, the documentation needs to be updated as well.
For example, say the business sends the team an Excel spreadsheet and says, “please make this into a web report.” Then, later on, they come back with some changes. You want to be able to pull up documentation of the original request as well as all subsequent iterations of that report.
Or say you’re working with a number of interconnected databases on which you are doing a lot of ETL (Extract / Transform / Load) processes. If something changes on Database A, you have to make sure that this doesn’t negatively impact Databases B, C or D. Having a clear diagram in the documentation of these databases and their relationships can be invaluable as you’re writing test cases.
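A consistency test like the one described above can be sketched in a few lines. Here the "databases" are in-memory SQLite stand-ins and the ETL is a toy copy step; the table names and the count/sum checks are assumptions for illustration, not a prescribed schema.

```python
import sqlite3

# In-memory stand-ins for Database A (source) and Database B (target).
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10.0), (2, 25.5), (3, 7.25)])
dst.execute("CREATE TABLE orders_fact (id INTEGER, amount REAL)")

def run_etl():
    """Toy extract/transform/load: copy orders from A into B."""
    rows = src.execute("SELECT id, amount FROM orders").fetchall()
    dst.executemany("INSERT INTO orders_fact VALUES (?, ?)", rows)

def check_consistency():
    """After the ETL, B must agree with A on row count and total
    amount, so a change in A that silently breaks the load into B
    is caught by this test case."""
    src_count, src_total = src.execute(
        "SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
    dst_count, dst_total = dst.execute(
        "SELECT COUNT(*), SUM(amount) FROM orders_fact").fetchone()
    assert src_count == dst_count, "row counts diverged between A and B"
    assert abs(src_total - dst_total) < 1e-9, "totals diverged between A and B"

run_etl()
check_consistency()
```

The database diagram from your documentation tells you which pairs of databases need a check like this.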
Communicate the test results
It doesn’t do anyone any good if QA is testing things in a vacuum and then keeping the results to themselves! Project managers and team leaders cannot ensure things are getting tested if they never see any test results.
Once a test has been executed, the test results should be documented, saved to the ALM or whatever centralized documentation system is being used, and shared with the appropriate team members. This reporting should answer questions such as: What was tested? Did each test pass or fail? What defects were found, and do any of them block the release?
Traditionally these test result reports were sent out via email. More recently we’ve seen messages going out through other communication systems, including automated alerts sent by the ALM.
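An automated alert of this kind boils down to summarizing the stored results into a short message. The sketch below assumes a simple mapping of test case IDs to pass/fail statuses; the IDs, ticket number, and message wording are all hypothetical.

```python
def build_alert(ticket_id, results):
    """Compose a test-result alert suitable for email or a chat channel.

    `results` is an assumed shape: a dict mapping test case IDs
    to the strings "pass" or "fail".
    """
    failed = sorted(case for case, status in results.items()
                    if status == "fail")
    passed = len(results) - len(failed)
    lines = [f"Test results for {ticket_id}: {passed}/{len(results)} passed"]
    if failed:
        lines.append("Failed cases: " + ", ".join(failed))
    else:
        lines.append("All tests passed; ready for sign-off.")
    return "\n".join(lines)
```

Calling `build_alert("PROJ-123", {"TC-1": "pass", "TC-2": "fail", "TC-3": "pass"})` would produce a two-line summary naming TC-2 as the failure, which is exactly what a project lead wants to see without opening the ALM.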
Build checking test results into the system
A great way to ensure things get tested is to make it a requirement of your development system. For example, you can require that, before a deployment can move forward, someone signs off that all tests have passed. In a continuous integration environment, this check can be built directly into the deployment pipeline.
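One way to wire this into a pipeline is a small gate script that exits nonzero, and therefore halts the deploy, unless every test passed and a sign-off is recorded. This is a sketch under assumptions: the result feed from the ALM and the sign-off field are hypothetical, and the exact integration point depends on your CI tool.

```python
import sys

def deployment_gate(results, signed_off_by):
    """Return True only if every test passed AND someone signed off.
    Intended to run as a pipeline step just before the deploy job.

    `results` is an assumed shape: a dict mapping test case IDs to
    True (passed) or False (failed), e.g. exported from the ALM.
    """
    if not signed_off_by:
        print("BLOCKED: no sign-off recorded")
        return False
    failures = sorted(case for case, ok in results.items() if not ok)
    if failures:
        print("BLOCKED: failing tests: " + ", ".join(failures))
        return False
    print(f"OK: all tests passed, signed off by {signed_off_by}")
    return True

if __name__ == "__main__":
    results = {"TC-1": True, "TC-2": True}   # fed in from the ALM export
    if not deployment_gate(results, signed_off_by="qa-lead"):
        sys.exit(1)   # a nonzero exit stops the pipeline here
</n```

Because the gate refuses to pass without both green tests and a named approver, nothing can reach production unless the testing actually happened.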
Another roundabout way to build checking the test results into the system is to require project owners to sign off on new features before they’re deployed to production. In this situation the development team would use Review meetings to demo the new features. Since no one wants to demo something for the business that hasn’t already been successfully tested by QA, this also serves as a double-check that tests are being done.
Conclusion
In sum, to ensure that everything that should be tested in your software development cycle does get tested, you need to start with written test cases (based on the requirements presented in the associated ticket), and then document, track and communicate everything that QA does.
Source: SQASquared