Principles of a Quality Assurance Process
Last week, we explored four key benefits of integrating a quality assurance workflow into your development process. But what does that process actually look like, and what strategies make for a successful QA check?
Here at PK, the core of every quality assurance check is the QA document (almost always a Google Doc). It contains several pieces of vital information, illustrated in the sketch after this list:
How to access the database (host IP information, etc.)
Relevant login credentials
The purpose of the check
List of features to check
Instructions on how to use the function, if necessary
Anticipated results
Results and feedback section
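For illustration, the top of a QA doc might look something like the sketch below. Every name, host, and account here is hypothetical; the point is simply that a tester should be able to go from zero to testing without asking anyone for setup details.

    Database: Acme Invoicing (development copy)
    Host: fms-dev.example.com (open with FileMaker Pro or FileMaker Go)
    Test accounts: qa_admin / qa_sales (passwords shared separately)
    Purpose: Verify the new invoice-approval workflow added this sprint
    Features to check:
      1. "Submit for Approval" button on the Invoice layout
      2. Approval notification for manager accounts
      3. Locked fields on approved invoices
    Anticipated results and tester feedback follow each feature below.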
Generally speaking, our in-house quality assurance is completed by a combination of non-billable units and junior developers. We’ve found that if the developer who created the functionality performs the check, they’re too close to the development process and won’t catch as many issues. Having less-familiar eyes on the database allows us to catch more potential problems. And when we use junior developers, it helps them become accustomed to a database they may work on in the future.
We aim to have separate development and production environments for client databases whenever possible. Several clients use FileMaker Go on mobile devices, a couple use MirrorSync, and a handful have WebDirect deployments. Because connection methods vary, it’s best to start your QA document by identifying the database(s) being tested and how to connect to them. Relevant login credentials for test accounts are also vitally important, particularly when working on features that should only be available to select permission levels.
From here, define the purpose of the check and the features needing review. It’s helpful to establish this scope at the outset, as it ensures everyone is on the same page about the desired end goal. It also keeps whoever is testing mindful of the steps required to accomplish each task, which puts them in a position to recommend improvements to the user experience along the way.
If necessary, provide instructions for completing each task. In a large database, someone unfamiliar with its features probably won’t know where everything is located, and spelling out where to find a function and how to use it will make the process go faster. The caveat is a brand-new database, or a test of how easy a new interface is to use. In those instances, little or no instruction can be preferable, since it demonstrates how easy or difficult the structure is to learn from scratch.
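Continuing the hypothetical example above, an instruction entry can be as brief as a sentence or two:

    Submit for Approval: Log in as qa_sales, open any unpaid invoice on the
    Invoices layout, and click "Submit for Approval" in the header.

That level of detail is usually enough to keep a tester moving without walking them through every click.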
Anticipated results should be clearly defined, whether that means what should happen, what shouldn’t happen, or a combination of both. Directly after this, leave room for the reviewer’s results and feedback on each piece of functionality being tested. If multiple people are involved, we use initials to identify who has checked what (which also marks whether a feature has been reviewed yet). Questions and unanticipated results are generally recorded with a screenshot showing what happened, along with notes on the steps that led to it. We also make sure we can replicate the result; if it appears to be a one-off or intermittent occurrence, we still note it down, along with the fact that we couldn’t reproduce it. If there’s a long list to be checked, we may also highlight results so they’re easily found: green for performed as anticipated, yellow if there are questions or concerns, and red for failed.
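Put together, a reviewed entry in the hypothetical doc above might read like this (the initials and details are invented for illustration):

    Feature: "Submit for Approval" button on the Invoice layout
    Anticipated: Status changes to Pending; fields lock; manager is notified.
    Result (JD): Status and locking work as expected. Notification arrived
    twice on one attempt; couldn't replicate. Screenshot attached. [yellow]

The initials, the note about the non-reproducible result, and the highlight color all follow the conventions described above.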
Once quality assurance has been completed, the developers on the project review the findings and notes. If they have questions, they’ll circle back with whoever performed the check to review together. Any problems are corrected before the features are sent to the client’s power users, who repeat the process.
By following these principles in our quality assurance process, we increase the odds of catching potential bugs before deployment. And as discussed last week, doing so allows us to fix small problems before they become big ones, ensures we’re building what users need, builds trust with our customers, and creates advocates for our software.
PK Information is a FileMaker-certified development agency serving the Tampa Bay, Miami Lakes, and Knoxville regions. We believe software should work the way you do, with business priorities first and technology second.
Would you like to learn more about quality assurance for FileMaker deployments? We’d love to discuss the possibilities with you!