A Guide to QA & Software Testing Best Practices

Elizabeth Diloreto
Dec 3, 2020

Simply put, delivering high-quality software would be virtually impossible without Quality Assurance. However, not all QA services and standards are created equal. With the right practices, QA ensures the technology you built does what you intended it to, and has precisely the right impact on users.

Our Lean Approach to QA and Software Testing

Our QA team embraces the principles of Lean Software Development, which is our way of ensuring we’re rapidly delivering value to our clients. Throughout each project, the team also sticks to the concept of Kaizen, a Japanese business term for continuous improvement. These principles help unify the team under the same goal. 

At a high level, QA services and standards entail manual QA, automation, website scanning, performance testing, security and penetration testing, and regression testing. In this guide, we aim to share our insights and learnings with the community.

Manual Software Testing and QA Best Practices

We use the following guidelines to ensure our team is following the best practices for manual software testing. This section summarizes the activities performed by our manual testers within the scope of a single project, as they pertain to individual features or defects.

Manual testing involves a variety of testing techniques. The Manual Tester performs thorough testing against the acceptance criteria on the main browser and device, determined by market share, and QA discusses any error results with development. Upon completion of the main set of functional tests, the Manual Tester performs Look & Feel testing across the remaining supported browsers/devices. Depending on the complexity of the feature, thorough functional testing may be required on additional browsers/devices. We complete testing in as "white box" a manner as possible, with access to the server, databases, backend integration systems, monitoring, and more. White box testing examines internal structures rather than just external functionality.

Issue Documentation

How issues are documented can vary based on the individual needs of the project. All of our reported issues include a short description of the issue, the expected versus actual behavior, and references to the specific acceptance criteria. If applicable, we include affected browsers/devices, details found in the browser console, and any steps taken to debug the issue.

Other important steps of issue documentation include blocking issues on a feature ticket, creating a bug ticket, and maintaining a Known Issues list. In each documented ticket, we include steps to reproduce the issue, often with the visual aid of a screenshot or video.
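As an illustration, a defect ticket following the structure above might look like the sketch below (the feature, codes, and console details are all hypothetical):

```
Title: [Checkout] Promo code field accepts expired codes

Expected: Expired promo codes are rejected with an inline error (AC-4).
Actual:   Expired code "SAVE10" applies a 10% discount.

Environment: staging | Chrome 87 / Windows 10 (also reproduced on Firefox 83)
Console:     POST /api/promo returns 200 with {"valid": true}

Steps to reproduce:
1. Add any item to the cart and proceed to checkout.
2. Enter the expired promo code "SAVE10" and apply it.
3. Observe that the discount is applied instead of an error.

Attachments: screenshot of the applied discount
```

A template like this keeps expected-versus-actual behavior, affected environments, and reproduction steps in a predictable place, so no one has to run down the same information twice.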

In our QA practice, everyone owns documentation, ensuring it's consistently clear and complete. If a Tester has to run down information in order to test, it's very likely that someone else will need to run down that information again later. This is a crucial opportunity for our team to eliminate waste.

Test Plans & Flow

QA leadership creates and maintains test plans tailored to the unique needs of each project. In the absence of a test plan deliverable, the project Confluence page is the repository for QA-specific knowledge related to the project.

Artifacts for testing are prepared in advance. Once a ticket is assigned to QA for testing, the Manual Tester reads through all tickets that are ready to test, including the ticket comments. They then review the project Confluence page for information such as supported browsers/devices, environments, and credentials.

Testing Activities:

Sanity Testing

When new features are implemented, tests are performed on functionality peripheral to the new feature, as well as on other elements of critical application functionality. The scope of this testing can be broadened as needed for the application, and is intended to help identify any regression issues introduced by the code changes. This approach is especially crucial when implementing hotfixes: a hotfix is a small piece of code developed to correct a major software bug or fault, released as quickly as possible.

Smoke Testing

Our QA team maintains a list of basic and crucial functionality on each project to include in a smoke test. This set of tests is performed on any potential release candidate, environment deployment, or integration.
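A smoke-test list like this can also be encoded as a small automated harness that fails fast when crucial functionality breaks. The sketch below is illustrative only — the checks are hypothetical stubs, and a real suite would issue HTTP requests against the release candidate:

```python
# Minimal smoke-test harness: each check is a named function returning
# True/False; the suite reports every failing check for quick triage.

def check_homepage_loads():
    # Hypothetical: a real check would GET the site root and verify a 200.
    return True

def check_login_works():
    # Hypothetical: submit known test credentials and expect a session.
    return True

def check_search_returns_results():
    # Hypothetical: run a canned query and expect a non-empty result list.
    return True

SMOKE_CHECKS = [
    check_homepage_loads,
    check_login_works,
    check_search_returns_results,
]

def run_smoke_suite(checks):
    """Run every check; return (passed, failure_names)."""
    failures = [check.__name__ for check in checks if not check()]
    return (len(failures) == 0, failures)
```

Keeping each check small and independently named makes it obvious, on any release candidate or environment deployment, exactly which piece of crucial functionality regressed.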

Decisions, Questions, and Clarifications

Any clarifications or decisions that affect the interpretation of acceptance criteria or involve changes to the acceptance criteria are discussed with the key decision-maker on the project. Results of these discussions are documented on the ticket. If the ticket is with QA, then it’s our responsibility to make sure that happens by entering ticket comments.


Automation

Using a suite of flexible, well-supported open-source tools to automate our client solutions adds the power of real-time CI pipeline tests. This enhances test coverage and surfaces issues faster, reducing costs and helping our clients sleep better at night. Because these tools run on our own servers, we gain all this additional assurance without expensive software licenses or the hassle and expense of third-party services.

Gauge is one of the two main automation tools we use to help automate client solutions. Gauge supports test creation in both high- and low-level scripts, along with data-driven testing. The tool allows our test designers to create simple but powerful test scripts.
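Gauge specifications are written in Markdown, with each step mapping to an implementation in the project's chosen language. A hypothetical spec showing the data-driven style (the feature, codes, and steps are invented for illustration):

```markdown
# Promo Code Validation

   |code    |valid|
   |--------|-----|
   |SAVE10  |yes  |
   |EXPIRED1|no   |

## Apply a promo code at checkout
* Add an item to the cart
* Apply promo code <code>
* Verify the code is accepted: <valid>
```

The table drives the scenario once per row, so test designers can extend coverage by adding data rather than writing new scripts.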

Website Scanning 

Whenever possible, QA utilizes automated website scanners to identify issues that aren't necessarily visible to the human eye or caught by automated tests. These scans review all of the HTML elements of the site, providing insight in a matter of hours that would take days of human testing.

SortSite by PowerMapper is a powerful website scanning tool that allows QA to quickly identify issues without significant manual effort. SortSite scans for errors and broken links, forbidden text and content issues, required text, accessibility, browser compatibility, privacy issues, search/SEO issues, web standards, and usability issues. We perform SortSite scans in a pre-production environment within the regression phase, and then again post-production launch.
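Conceptually, a scanner walks the HTML of every page and inspects each element. A standard-library-only sketch of one such check — extracting candidate link targets for broken-link verification (a real scanner like SortSite performs many more checks than this):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href/src targets so each can later be fetched and
    checked for 404s or other errors."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Running this over every page of a site yields the full link inventory in minutes — the kind of exhaustive sweep that is impractical to do by hand.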

Performance Testing

Testing application performance is an activity that takes place throughout the development process. Performance testing consists of both observations that are drawn from carefully planned functional testing, and analysis of system behavior when the entire system is pushed to its limits. 

Our functional testers are trained to take on the role of the appropriate user when performing any test. They evaluate the timing of each system response against what the application's user would expect. Issues with response timing or changes of system state are raised immediately and are best addressed early in development.
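The timing evaluation described above can be made explicit with a small measurement harness. This is a sketch, not our production tooling — the action and threshold below are arbitrary examples, and real budgets come from user expectations:

```python
import time

def time_response(action, threshold_seconds):
    """Run an action, measure wall-clock duration, and flag whether the
    response stays within the user-facing time budget."""
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= threshold_seconds

# Hypothetical usage: time a stand-in computation against a 1-second budget.
result, elapsed, within_budget = time_response(
    lambda: sum(range(1000)), threshold_seconds=1.0
)
```

Wrapping individual interactions this way lets testers attach a concrete number to "this feels slow," which makes timing issues much easier to raise and track early in development.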

Load Testing

We’ve developed a robust, fully scalable solution to load testing, allowing us to simulate any desired load against an application. Supported by an industry-leading development team and the power of our DevOps solutions, we’re capable of identifying and resolving any issues with load tolerance well before they become a problem for users. We design our load tests in Locust.io, an open-source solution scripted in Python that provides a web interface for testing, real-time reporting, and exportable test results.
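Locust itself is a third-party package, but the core mechanic it scales up — many simulated users issuing requests concurrently while latencies are collected — can be sketched with the standard library alone. In this illustration the request is a stub; Locust replaces it with real HTTP calls and adds distributed workers and a web UI:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request():
    """Stub standing in for a real HTTP call to the system under test."""
    time.sleep(0.01)  # pretend the server took ~10 ms to respond
    return 200

def run_load_test(request_fn, users, requests_per_user):
    """Spawn `users` concurrent workers and collect per-request
    (status, latency) samples across all simulated sessions."""
    def user_session():
        samples = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = request_fn()
            samples.append((status, time.perf_counter() - start))
        return samples

    with ThreadPoolExecutor(max_workers=users) as pool:
        sessions = [pool.submit(user_session) for _ in range(users)]
        return [sample for s in sessions for sample in s.result()]

results = run_load_test(simulated_request, users=5, requests_per_user=3)
```

From samples like these, a load test derives throughput and latency percentiles, and reveals where response times degrade as simulated load increases.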

Security and Penetration Testing

Based on individual project needs, we are able to provide both security analysis and penetration testing. Below, we outline our approach to each type of testing and the tools we use to ensure the security of the applications we build.

Ultimately, the security of an application is best ensured from the beginning of development. Our team regularly performs automated security scans, which helps us maintain good coding practices throughout the software life cycle. For other types of security testing, activities are frequently scheduled in conjunction with regression prior to a major delivery.

Compliance and Conformance 

Compliance refers to how well the product adheres to mandatory laws or regulations. Compliance testing could be performed on an application to meet state or federal regulations, such as requirements to encrypt data in motion and at rest.

Conformance refers to voluntary adherence to institutional guidelines. The criteria can be somewhat vague at times, and the reasons for implementing them can vary. In some cases, adherence is strictly for the purposes of improving processes or quality. In other cases, it could be the result of corporate policies or guidelines for organizational membership. Conformance testing is more challenging, as few tools have been developed for the purposes of testing for these standards. 

We use OWASP ZAP, an open-source tool and testing staple designed to scan and test sites for security compliance. OWASP ZAP produces exportable reports for ease of reference.
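For reference, ZAP's documented "baseline" scan can be run from its official Docker image roughly as follows — the target URL is a placeholder, and the image tag and flags may vary by ZAP version, so check the current ZAP documentation before relying on this:

```
# Run ZAP's baseline scan against a staging URL and write an HTML report
# into the current directory (mounted as ZAP's working directory).
docker run -t -v "$(pwd)":/zap/wrk/:rw ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com -r zap-report.html
```

The baseline scan passively spiders the site without attacking it, which makes it safe to fold into a CI pipeline; ZAP's full active scans are reserved for deliberate penetration-testing windows.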

Regression Testing

Regression testing includes a comprehensive re-test of all site functionality, look and feel, design responsiveness, and component integration. These tests are scheduled activities and require coordination between the development team and QA. In all cases, our QA Manager will communicate the regression plan to the team.

Timing & Frequency

A short-term delivery hand-off occurs when development activities end after delivery on a short-term project, or when contracted development support is limited post-delivery. Regression testing is performed upon confirmation of code completion and prior to the actual hand-off. The schedule includes time for the tests to be performed, as well as time for development to resolve any critical issues found during regression. Our timing varies based on the roadmap delivery and the complexity of the application.

Executing the Regression and Tracking Results

Our approach for performing the regression tests and tracking results depends on the tool used for the Test Suite, as well as the scope of the contract agreement for QA. In all cases, a ticket exists on the board to track the progress of the regression effort. All artifacts relevant to the regression effort are accessible through the ticket.

If the agreement includes the maintenance of a test suite, along with test cases as a deliverable, QA often creates a regression test from the test suite within the application where tests are being maintained. We recommend TestRail for this purpose, a robust and intuitive suite of tools that makes it easy to build test cases, run regression suites, and provide reporting on each test cycle. Tests are executed from the regression suite and documented according to the pass/fail criteria. 

If the agreement doesn’t include the maintenance of a test suite or test case documentation, regression combines several test techniques in order to catch the greatest number of issues. Smoke testing hits key functionality and verifies components end to end. Experiential testing leverages domain knowledge of the application as much as possible to determine the best approach. Risk-based testing shapes the strategy around the areas of greatest risk, intensifying efforts around past bug clusters. Look & feel and exploratory testing involve QA sweeping the application across supported browsers, exploring it through different views.
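Risk-based prioritization can be made concrete with a simple scoring model — likelihood of failure times impact of failure — used to order the areas under test. The areas and scores below are hypothetical, purely to illustrate the mechanic:

```python
def prioritize_by_risk(areas):
    """Order test areas by risk score (likelihood x impact, each rated
    1-5), so the riskiest functionality is regression-tested first."""
    return sorted(
        areas,
        key=lambda a: a["likelihood"] * a["impact"],
        reverse=True,
    )

# Hypothetical test areas; a past bug cluster raises likelihood.
test_areas = [
    {"name": "checkout",   "likelihood": 4, "impact": 5},
    {"name": "search",     "likelihood": 2, "impact": 3},
    {"name": "static FAQ", "likelihood": 1, "impact": 1},
]

plan = prioritize_by_risk(test_areas)
```

Even a rough model like this gives the team a shared, defensible order of attack when regression time is limited.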


Our QA team coordinates with key decision-makers and the development team in order to determine the best approach to communicate and track issues for the regression. Most frequently, QA tracks issues found within a spreadsheet which is linked to the ticket and shared with the team, or from reports generated through TestRail. From these documents, the project lead can groom the issues and make quick decisions about priority. 


Quality assurance happens behind the scenes, and QA teams rarely get overt credit for the seamless digital solutions they test. Done properly, with meticulous attention to detail, QA is the reason beautiful software doesn’t fail.

Spire Digital’s QA team delivers constant value by deploying best practices and finding innovative, efficient ways to deliver great software. While efficiency is essential, we also look for the best approach to mitigate risk for our clients, creating rock-solid disruptive solutions. We plan and execute every QA activity with careful consideration of the value that it offers to our clients. 

We are constantly seeking improvement. Our QA team is dedicated to discovering the best practices and most efficient ways to deliver great software. They’ve built a foundation for Spire Digital to be ranked second amongst the top software developers in the world, and it’s one of the reasons we’ve been in the digital transformation space for 22 years.
