In Part 2 of this series, we discussed the importance of user state management and how we ensured a fresh user account for each test scenario within Waldo on Waldo (WoW).
While this approach eliminated side effects between scenarios, it required us to recreate all the necessary data for each test, which becomes expensive for complex scenarios.
In this article, we'll explore how we addressed this challenge and optimized our test suite for efficient execution.
The Need for Data Cloning
Efficient testing requires creating many objects and scenarios; in our case, that means application versions, tests, and runs. However, setting up each scenario individually can be time-consuming, especially for complex test cases. Let's illustrate this with an example: reproducing a dependency error within our test builder.
Previously, we had to follow a lengthy process involving creating and modifying multiple tests, launching validation suites, and updating dependencies. This could easily take between 5 and 10 minutes per scenario. Time is precious, and we needed a better solution.
Using preset accounts seemed like an option, but it posed the risk of their state drifting over time.
Therefore, we decided to combine the best of both worlds: preset accounts that are purely read-only. Once they are set up, no one is allowed to modify them. New accounts can then clone their state from these preset accounts.
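The idea can be sketched as follows. Everything here (`PresetAccount`, `clone_state_into`, the field names) is a hypothetical illustration of the pattern, not Waldo's actual code:

```python
import copy
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen: preset accounts are read-only by construction
class PresetAccount:
    name: str
    apps: tuple = ()   # immutable tuples of pre-built fixtures
    tests: tuple = ()

@dataclass
class TestAccount:
    name: str
    apps: list = field(default_factory=list)
    tests: list = field(default_factory=list)

def clone_state_into(preset: PresetAccount, fresh: TestAccount) -> TestAccount:
    """Deep-copy the preset's state so the test can mutate it freely."""
    fresh.apps = [copy.deepcopy(a) for a in preset.apps]
    fresh.tests = [copy.deepcopy(t) for t in preset.tests]
    return fresh
```

Making the preset type frozen encodes the "read-only" rule at the type level, while test accounts receive deep copies, so mutating them never touches the preset.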
Cloning Endpoints: A Lifesaver
Our data cloning endpoints enabled us to clone various elements, including applications, app versions (builds), tests, and runs. When cloning a run, we first cloned the underlying tests. Similarly, to clone a test, we first cloned the associated app versions, and so on. This hierarchical cloning mechanism proved to be a game-changer.
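The hierarchical order (run → tests → app versions) can be sketched against a toy in-memory store. The flat-dict datastore and all function names here are assumptions for illustration, not Waldo's real endpoints:

```python
import copy
import itertools

_ids = itertools.count(100)  # stand-in for real ID generation

def clone_app_version(db, version_id, account):
    """Leaf of the hierarchy: an app version has no dependencies to clone."""
    new_id = next(_ids)
    db["versions"][new_id] = {**copy.deepcopy(db["versions"][version_id]),
                              "account": account}
    return new_id

def clone_test(db, test_id, account):
    """Cloning a test first clones the app version it targets."""
    test = copy.deepcopy(db["tests"][test_id])
    test["version_id"] = clone_app_version(db, test["version_id"], account)
    test["account"] = account
    new_id = next(_ids)
    db["tests"][new_id] = test
    return new_id

def clone_run(db, run_id, account):
    """Cloning a run clones each underlying test (and, transitively, versions)."""
    run = copy.deepcopy(db["runs"][run_id])
    run["test_ids"] = [clone_test(db, t, account) for t in run["test_ids"]]
    run["account"] = account
    new_id = next(_ids)
    db["runs"][new_id] = run
    return new_id
```

Each level only knows how to clone its direct dependencies, so the whole graph is copied by a single top-level call and every cloned object is remapped to the new account.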
With these cloning endpoints in place, we could manually create complex test scenarios in a sample application. Then, using deep links, we simply cloned that data into our WoW tests. This streamlined the process and allowed us to test a wide range of use cases and edge cases efficiently.
Implementing these endpoints took us less than a week and proved to be a great investment.
Testing Error Scenarios
When it comes to mobile app testing, various scenarios are considered to ensure the application functions as intended. The two most common types are the happy path and negative testing. The happy path refers to testing the application under normal, expected conditions, where users follow the typical flow of actions and interactions.
On the other hand, negative testing involves intentionally testing the application's ability to handle unexpected or erroneous inputs, such as invalid data or incorrect user actions. Negative testing is often trickier because it requires specific setups to reproduce errors, such as manipulating network conditions or simulating unusual device states.
To test our error scenarios, we employed some additional tricks. For example, to simulate different error cases within the test builder, we utilized app version cloning with special package names.
For instance, we used package names like "cannot.install.app," which triggered the "cannot-install-app" error banner. This approach enabled us to thoroughly test various error scenarios in both the test builder and our live session component.
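A minimal sketch of that trick, assuming a lookup table of magic package names (only "cannot.install.app" comes from the article; the function and exception names are hypothetical):

```python
# Magic package names deterministically force a given error banner.
# Only "cannot.install.app" is from the article; others would be added alongside.
ERROR_PACKAGES = {
    "cannot.install.app": "cannot-install-app",
}

class InstallError(Exception):
    """Raised when the (simulated) install fails; carries the banner id."""
    def __init__(self, banner: str):
        super().__init__(banner)
        self.banner = banner

def install_app_version(package_name: str) -> str:
    """Simulate installing a build: magic names trigger their error banner."""
    banner = ERROR_PACKAGES.get(package_name)
    if banner is not None:
        raise InstallError(banner)
    return "installed"
```

Because the trigger lives in the cloned app version's package name, no special infrastructure (broken networks, corrupted builds) is needed to reproduce the error deterministically.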
Optimizing Test Suite Performance
In Part 4 of this series, we will focus on optimizing our test suite to run as quickly as possible while minimizing the resources required from our infrastructure. We'll explore techniques and strategies that ensure efficient execution and help us maintain a robust testing process.