Working in mobile development as a backend engineer, and later as an engineering leader, I found myself coming back to the same problem: the boolean results generated by my team’s tests were difficult to understand and act on.
Your test passes: do you know exactly what passed? What is your remaining exposure? Is the build ready, or is there a scenario you have not thought to test?
Your test fails: is it a true failure? What is causing it? Is it failing on multiple device types or just one? Were the test conditions appropriate?
Mobile teams in every industry are spending time, budget, and headcount on building and maintaining testing solutions that ultimately fail to provide reliable or actionable results.
Reliable, actionable results should be the ultimate goal of any healthy testing process, because good results tell you whether your testing process is working. Not only in the sense that they generate a pass/fail verdict, but in that they tell you exactly what you accomplished in that test, so you can understand the coverage you have.
Yet, this is the part of testing (almost) no solution gets right.
Manual testing and scripting cannot provide clarity
Manual testing fails to deliver on this because it requires you to trust that the thumbs and devices moving through your user flows are testing what you asked them to. With no tangible artifacts generated along the way to help you recreate issues yourself, this is an exercise in blind trust.
Scripting offers the benefits of automation, but when it comes to results, all you get is a boolean return. That result doesn't tell you what is actually happening on the screen itself; it just tells you that your test passed or failed, with little to no context.
For example, if a script can carry out its commands, your test may still pass even if your screen is a complete mess. This makes even "pass" results incomplete.
Additionally, the script itself must be extremely prescriptive, requiring you to add more and more commands to a test to make it thorough. That makes scripts harder to maintain over time.
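To make the limitation concrete, here is a minimal sketch, assuming a hypothetical screen model and test function (not any real UI-testing framework): the script checks only the element it was told to interact with, so it returns a bare boolean even while other parts of the screen are visibly broken.

```python
# A hypothetical rendered screen: element id -> whether it displays correctly.
screen = {
    "login_button": True,    # the one element the script interacts with
    "promo_banner": False,   # visibly broken, but the script never looks at it
    "footer_links": False,   # also broken
}

def scripted_test(screen: dict) -> bool:
    """Tap the login button and report pass/fail -- nothing more."""
    return screen.get("login_button", False)

result = scripted_test(screen)
print("PASS" if result else "FAIL")  # prints "PASS" despite two broken elements
```

The only way to catch the broken banner and footer with a script like this is to keep adding explicit checks for them, which is exactly the maintenance burden described above.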
Waldo is focused on providing transparent and reliable results
We started Waldo to provide mobile teams with all of the necessary controls, alerts, and information to know that when a test passes, they are ready to release.
Waldo not only simplifies the creation and execution of your mobile tests, it provides the information required to fully understand (and subsequently trust) your results. We are able to provide this assurance because we fundamentally changed the way your build is tested and evaluated.
Compare new builds with expected behaviors
Waldo tracks the health of your build over time and can compare the behavior of a new build to a build that you put forward as the “expected behavior” of your app. Instead of anticipating what could go wrong and trying to check for those errors, you simply provide what “correct” looks like and ask whether it can be replicated.
Waldo checks the structure of each screen against the previous build, notes any discrepancies between the two, and verifies that the actions required to progress the test can still be completed.
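The idea of a structural comparison can be sketched as follows. This is an illustrative toy example, not Waldo's actual engine; the screen representations and the `diff_screens` helper are assumptions made for the sake of the sketch.

```python
# Each screen is modeled as element id -> element type.
baseline = {"title": "text", "login_button": "button", "avatar": "image"}
new_build = {"title": "text", "login_button": "button", "signup_link": "link"}

def diff_screens(baseline: dict, candidate: dict) -> dict:
    """Report elements missing from, added to, or changed in the candidate."""
    return {
        "missing": sorted(set(baseline) - set(candidate)),
        "added": sorted(set(candidate) - set(baseline)),
        "changed": sorted(
            k for k in set(baseline) & set(candidate)
            if baseline[k] != candidate[k]
        ),
    }

report = diff_screens(baseline, new_build)
print(report)  # {'missing': ['avatar'], 'added': ['signup_link'], 'changed': []}
```

A report like this surfaces *what* differs between builds, rather than collapsing everything into a single pass/fail bit.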
When a test fails in Waldo, you know where it failed, the steps leading up to that failure, and how that failure presents itself, and you have a full video replay of the test, the logs, and the network requests.
When a test passes in Waldo, you can have complete peace of mind knowing that your build performed as expected: no gaps, no hidden bugs you didn't think to check for.
This kind of testing does not slow developers down: it speeds them up by giving them clear, transparent test results that make it easy to know when a build is ready to release.
Want to see these results for yourself? Waldo has a free, browser-based testing tool called Sessions you can try now. Upload your own app or use the demo apps available to see Waldo in action.
Interested in learning more about how Waldo can help you unlock these results and automate your test suite? Reach out to our team.