The Agile software development process we use requires small tasks completed quickly. A downside of this approach is that small changes can have large impacts on other areas of the software if they are not thoroughly planned. The most important way to ensure consistent quality is an integrated suite of tests run after every task is completed. At Smooth Logics, we accomplish this through automated integration testing, exploratory testing, and manual testing.
Test Planning and Modeling
The first step towards creating tests is understanding the task to be completed. Modeling the system is a good way to communicate the complexity and uncover interesting scenarios to test. Flow Charts, Finite State Machines, Decision Tables, and Data Modeling are some of the models we use to describe the requirements. From these models we decide what the most common paths through the software are, which critical components may cause problems, and whether there are other interesting scenarios at the edges of the requirements. We then write a test to ensure that each scenario behaves as expected.
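As a minimal sketch of how a decision table can drive test scenarios, consider a hypothetical discount rule (the rule, thresholds, and names below are illustrative, not actual Smooth Logics logic). Each row of the table is a requirement, and each row becomes a test case:

```python
# Hypothetical decision table for a discount rule. Each row is
# (customer_is_wholesale, order_total, expected_discount); the rule and
# numbers are illustrative only.
decision_table = [
    (True,  1000, 0.10),   # wholesale, large order -> 10% discount
    (True,   100, 0.05),   # wholesale, small order -> 5% discount
    (False, 1000, 0.05),   # retail, large order    -> 5% discount
    (False,  100, 0.00),   # retail, small order    -> no discount
]

def discount_for(is_wholesale, order_total):
    """Toy implementation of the rule the table describes."""
    if is_wholesale:
        return 0.10 if order_total >= 500 else 0.05
    return 0.05 if order_total >= 500 else 0.00

def run_table(table):
    """Turn every row of the decision table into a test scenario.

    Returns the rows where the implementation disagrees with the table.
    """
    failures = []
    for is_wholesale, total, expected in table:
        actual = discount_for(is_wholesale, total)
        if actual != expected:
            failures.append((is_wholesale, total, expected, actual))
    return failures

failures = run_table(decision_table)  # empty list means every row passes
```

Because the table and the tests share one source, adding a row to the requirements automatically adds a scenario to the suite.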
Once we have a scenario we want to test, we need to write some code that covers it. A common pattern for creating tests is called “Arrange, Act, Assert”. First, we create data (Arrange) to represent the state of the system we want to test. Second, we write test code that calls the code we want to test (Act). Finally, after the test code has completed, we check (Assert) that the outcome of the action was what we expected. At Smooth Logics, we assert in three main ways: were there any exceptions (did the software crash)? Is the outcome of the specific action as expected? And is all the data in the database in a consistent state after performing the action?
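A minimal Arrange-Act-Assert test might look like the sketch below. The `Inventory` class and its methods are hypothetical stand-ins, not actual product code; they are just enough to show the three phases and the three kinds of checks:

```python
# Hypothetical system under test: a tiny inventory ledger.
class Inventory:
    def __init__(self):
        self._stock = {}

    def receive(self, part, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._stock[part] = self._stock.get(part, 0) + qty

    def on_hand(self, part):
        return self._stock.get(part, 0)

def test_receiving_increases_on_hand():
    # Arrange: build the system state the scenario needs.
    inventory = Inventory()
    inventory.receive("WIDGET-1", 5)

    # Act: call the code under test. An unexpected exception here
    # fails the test (the "did the software crash" check).
    inventory.receive("WIDGET-1", 3)

    # Assert: the specific outcome is as expected...
    assert inventory.on_hand("WIDGET-1") == 8
    # ...and the stored data is still in a consistent state.
    assert all(qty > 0 for qty in inventory._stock.values())

test_receiving_increases_on_hand()
```

Keeping the three phases visually separated makes each test read as a small specification: given this state, when this happens, expect this result.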
Benefits of Integration Testing
When we add a new feature or new subsystem to the software, we want to know that we didn’t break something in a different subsystem. When we add new features and their associated tests, they become part of the testing suite that is run each time. For example, if we have a request to add a new feature in the manufacturing system, we want to be certain that we did not break anything related to costing. Costing can be a complex part of the software, and if we had to manually exercise every scenario in which manufacturing can affect costing each time a change was made, each feature request would take months. By constantly adding to and tweaking our testing suite, we can know with a high degree of confidence that when we add a feature, everything in costing is still functioning exactly as intended. We get the equivalent of 3-4 weeks (or more) of detailed manual testing completed in about 10 minutes of running automated tests. This both increases throughput and improves quality.
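The mechanics of an ever-growing suite can be sketched as a simple test registry (the subsystems and names below are illustrative, not our actual architecture): each feature registers its tests once, and every subsequent change reruns all of them.

```python
# Sketch of a cumulative regression suite. TESTS accumulates every test
# ever registered; run_suite() reruns them all on each change.
TESTS = []

def regression_test(fn):
    """Decorator that adds a test to the permanent suite."""
    TESTS.append(fn)
    return fn

# Simplified stand-ins for the manufacturing and costing subsystems.
def manufacture(parts_cost, labor_cost):
    return {"parts": parts_cost, "labor": labor_cost}

def total_cost(work_order):
    return work_order["parts"] + work_order["labor"]

@regression_test
def costing_unaffected_by_manufacturing():
    # Added when a manufacturing feature shipped; it now guards the
    # costing subsystem against every future change.
    order = manufacture(parts_cost=40, labor_cost=10)
    assert total_cost(order) == 50

def run_suite():
    for test in TESTS:
        test()          # any AssertionError or crash fails the run
    return len(TESTS)

tests_run = run_suite()
```

The key property is that tests are only ever added, so the cost of a mistake in one subsystem is caught by checks written when the other subsystem was built.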
Sometimes during test planning, the system is so complex that there may be interactions in other areas of the software that go undiscovered even with good system modeling. To help find these areas before releasing the software, we have another phase of testing: Exploratory Testing. We have developed a software and test architecture that allows us to create completely random scenarios and hit every aspect of the software looking for any anomalies. If we find something that violates a data rule or causes a crash, we create a new test and add this to our automated test suite so that we can permanently close that hole.
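The random-scenario idea can be sketched as seeded exploration of a toy model (the ledger, operations, and invariant below are hypothetical, chosen only to show the shape of the technique): apply random operations, check a data rule after each step, and keep the seed whenever a violation appears so the failure can be replayed and turned into a permanent test.

```python
import random

def explore(seed, steps=200):
    """Run random operations against a toy ledger, checking an invariant.

    Returns None if no anomaly is found, otherwise (seed, history) so the
    exact failing run can be replayed and added to the automated suite.
    """
    rng = random.Random(seed)      # seeded so every run is reproducible
    balance = 0
    history = []
    for _ in range(steps):
        op = rng.choice(["deposit", "withdraw"])
        amount = rng.randint(1, 100)
        if op == "deposit":
            balance += amount
        elif balance >= amount:    # guard mirrors a business rule
            balance -= amount
        history.append((op, amount))
        # The invariant (our "data rule"): balance must never go negative.
        if balance < 0:
            return seed, history   # anomaly found: keep seed for a new test
    return None

anomaly = explore(seed=42)         # None means this run found no violations
```

Seeding is what makes exploratory findings actionable: a random failure that cannot be reproduced cannot be closed permanently.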
Even with the testing that is built-in and run during the development of every feature, we still want to make sure that the end-user experience is as designed. Before release, our QA department runs through every new feature in the final software. This manual testing includes checking for issues at the edges of the feature, making sure the User Interface is completed as designed, and again that there are no functionality issues. If there are any questions that something isn’t operating correctly at this level, the software designers, developers, and QA get together to resolve them before the new features are released.
Building test planning, creation, and execution into the development process ensures that as we grow the software, we maintain quality. Adding exploratory testing helps ensure that all interactions have been considered. Finally, QA manual testing ensures that COUNTERPART performs as expected for the end-user. Using an Agile software development process coupled with these levels of testing allows our team to achieve both high throughput and high quality.