
11 Types of Test Automation, Explained

One of the best ways of explaining DevOps is, "You build it, you run it." This also means testing your software. Automating tests frees up time for other tasks, such as adding features. That's why automation is central to DevOps.

DevOps teams tend to automate as much as possible: testing, building, releasing, monitoring, alerting, infrastructure management, and much more. When it comes to automated tests, there are many options to choose from. Each has its own advantages at different stages in the software development life cycle. It's good to know where you can use which type of test and which problems they solve.

You can execute automated tests any time you like, over and over again. Without automation, this process would be tedious, error-prone, sometimes even impossible. After you make a change to the code, you can run tests to see if you broke something. Or you can run them at scheduled intervals without the need for any human intervention.

While they certainly can have issues, automated tests won't forget certain steps. They won't overlook minor errors that a human tester might miss. If set up well, they'll execute the software and verify the results the same way every time.

Some tools even allow test automation in scenarios that would be impossible to test manually. Load tests are the perfect example. Computers can simulate hundreds or thousands of requests to a web server, something no team of human testers could do. But even simpler tests can be hard to execute manually, such as simulating certain error conditions. Automated testing has many additional benefits, such as providing a safety net for refactoring, which enables developers to deliver business value faster while keeping code quality high.

Software testing has therefore become a lot easier with automated tests. In fact, we couldn't fully embrace DevOps without it. But there are different types of automated tests, each with their own use case.

Before we dive into the different kinds of automated tests, it makes sense to look at the broader context: what automated testing is, when to use it, and how these tests are formed. After covering the different types of automated tests, we'll discuss automated testing tools and how to evaluate them.

What Is Automated Testing?

Automated testing is the practice of having a machine run tests that analyze certain parts of your software.

Manual testing also runs tests to analyze the software, but in manual testing, humans run the tests. We start the software and run through the steps necessary to verify a certain outcome. We then verify that outcome and take note of it. This is often a task for dedicated testers. However, manually testing even a fairly limited piece of software still requires running through many test cases. This makes it time-consuming and subject to error. It also doesn't scale well in a DevOps world where you want to release features in small steps at regular intervals.

Automated testing doesn't remove the need for testers to manually run tests. It does, however, solve the problem of scalability. Automated tests will run much faster than manual tests, and they free up time for developers to develop.

As you'll see, automated testing tools come in many different forms. In essence, they allow a human to design a test scenario and have a tool run the tests.

An automated test consists of three basic steps:

  • Setting up the context in which the test will run
  • Executing the steps that simulate real-world behavior
  • Verifying the resulting state of the software

You only need to command the testing tool to run through the defined steps. If all goes well, the test succeeds. If the software isn't in the correct state, the tool will indicate as much and tell you what's wrong.
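To make this concrete, here's a minimal sketch of such a test in Python with pytest. The ShoppingCart class is a made-up example; the comments mark the three basic steps:

```python
class ShoppingCart:
    """A made-up class under test, for illustration only."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_total_reflects_added_items():
    # 1. Set up the context in which the test will run.
    cart = ShoppingCart()

    # 2. Execute the steps that simulate real-world behavior.
    cart.add("book", 12.50)
    cart.add("pen", 1.50)

    # 3. Verify the resulting state of the software.
    assert cart.total() == 14.00
```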

Most automated testing tools allow you to reuse parts of your tests in other tests. If the tool works in code, developers can easily group functionality in modules that they can reuse across different tests. If it's a graphical testing tool, you can often define steps so that you can use them in different scenarios. Reusability adds to the scalability of automated testing. It allows teams to build tests faster, giving them feedback about their changes more frequently. This in turn enables organizations to get their products out to end users faster and in a more reliable state.

So, when should you implement automated tests?

When to Rely On Automated Software Testing

Automated testing can be used in many areas and in many stages of software development. The most popular and well-known type of automated testing is unit testing. You'll typically set this up early in the software development process. Ideally, you'll do so at the same time the code is written. This gives developers immediate feedback about the changes they make.

But automated testing is more than unit testing. We'll go into the different types later, but certain types of tests might not execute after every change. Some test suites take so long to run that it doesn't make sense to execute them after every change. Running these tests would slow down developers rather than enable them to deliver faster.

Developers can't write every type of test in parallel with the code, though. Testing the UI of a mobile app is one example: you need to finish the new feature entirely before you can write tests for it. But once you have these tests, you can easily run them again in the future.

Finally, there are cases where it makes sense to write and execute tests after shipping the software. Performance tests are an example. You might want to test how a web application reacts under a heavy load. Another case is when users report bugs. If you can isolate the bug to a smaller unit or set of units, you can write a unit or integration test to reproduce it. By debugging this single test, you can step through your code, see what is actually executed, then identify and fix the bug.
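For example, suppose users report that empty orders come out with the wrong total. A sketch of a test that pins the bug down to a single unit (the invoice_total function is hypothetical):

```python
def invoice_total(line_items):
    # Hypothetical function under test: sums (quantity, unit_price) pairs.
    return sum(qty * price for qty, price in line_items)


def test_empty_order_totals_zero():
    # Reproduce the report in isolation: an empty order should total zero.
    # If this fails, debugging just this one test steps you straight to the bug.
    assert invoice_total([]) == 0
```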

Automated testing is useful in all stages of your software development workflow. Now let's look at the life cycle of automated tests.

Typical Life Cycle of Automated Tests

Most automated tests go through a similar life cycle. Some organizations take these steps deliberately. In others, teams might not even be aware that they're performing certain steps.

Everything starts with determining the testing scope. Will you be testing a single function in code? Or the entire application? Will you be testing functionality or something else, such as performance or memory usage?  You must also decide on your test cases. What exactly are you testing? Which steps will you execute in your application, and what will you verify?

The test automation tool you select will depend on your scope. Different tools serve different purposes. It's probably not a good idea to write performance tests with a unit testing tool, for example.

Once you've made decisions about your test cases and chosen an automation tool, you can set up your testing environment. This could be a build server, a staging environment, or a temporary container or virtual machine: whatever you need to execute your tests on. In some cases, you can automate the setup of the test environment in your test script.

In your test script, you define what the test must do. This happens in a language the automation tool understands. The test script can often be split into three steps:

  • Setting up prerequisites such as required test data or faking expected network calls
  • Executing the steps you want to test
  • Validating the test results
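Here's a minimal sketch of those three steps in Python, assuming a hypothetical fetch_username helper built on the requests library; the standard library's unittest.mock fakes the expected network call:

```python
from unittest.mock import Mock, patch

import requests


def fetch_username(user_id):
    # Hypothetical helper that calls a remote API.
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()["name"]


def test_fetch_username_returns_name_from_api():
    # 1. Prerequisites: fake the expected network call.
    fake_response = Mock()
    fake_response.json.return_value = {"name": "ada"}

    with patch("requests.get", return_value=fake_response):
        # 2. Execute the steps you want to test.
        name = fetch_username(42)

    # 3. Validate the test results.
    assert name == "ada"
```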

Finally, you can execute the test. But you're not done yet. You should evaluate the test results and confirm that your automated test worked as expected. If so, you can execute the test regularly. When you repeat this process for new test cases, you can skip some of these steps: you'll already have a test environment and an automation tool in place. This makes it easier to add new test cases to your test suite.

But don't neglect test maintenance. After a while, you'll have many tests. As with your production code, you'll have to maintain the quality of these test scripts, both technically (is the script clear, readable, and maintainable?) and functionally (does the test still have value?).

Next, let's look at the different types of tests.

[Image: a testing pyramid outlining functional and nonfunctional tests]

Types of Automated Tests

There are many different types of automated tests, and this list may not be complete. But I believe it covers the most popular ones, which should help any development team test their application thoroughly. Some of these types overlap, while others stand apart.

Functional Testing

Functional testing is performed against finished features. When a testing team steps through the UI to execute a test scenario, they're performing functional tests. They aren't concerned with the inner workings of the software. That's why we sometimes call it black box testing: we can't see what's in the box.

Functional testing is the process of making sure the application behaves as an end user would expect it to. If these tests pass, you should be able to deliver an application that solves the end user's problem as expected. If they fail, this indicates that the application would not do so, and the development team will have to fix this. It could be a small fix for a small bug, but it might also require a larger effort.

Even if all functional tests pass, this doesn't guarantee that the end user will be happy with the application. Maybe some features are missing. Perhaps a feature works but still falls short of the end user's expectations.

Smaller types of tests like unit or integration tests can also be functional tests as long as they're testing functionality without verifying the concrete implementation.

Nonfunctional Testing

Nonfunctional testing is another broad category. It aims to test certain aspects of the application that might affect end users even if the application still includes all the functionality that the end user needs. Here are some examples.

  • Security. Is the application secure?
  • Performance. Does the application feel responsive?
  • Scalability. Can the application handle large amounts of data and/or user interactions?
  • Crash recovery. Does the application recover well from failures?

When these tests fail, it indicates that the software may work well for the end user now, but it risks having certain issues in the future that would be detrimental to the user experience. If all tests pass, there's no guarantee that everything is okay, but the application has proven to be robust.

Many of these tests only make sense once the application contains at least a few implemented features. But it's good practice to account for them early in the application design. Doing so later will be much harder. Good luck making an application scalable if it was never designed to be.

Nonfunctional tests are often executed on a regular basis but not necessarily after every change.

Unit Testing

Unit testing is conducted while writing code. It provides developers with fast feedback about the code they're writing. Instead of having to spin up the application after each change, they can run unit tests.

After some time, an extensive test suite gives developers confidence to change existing code. If they accidentally break something, the test suite will inform them, and they can fix the bug. If the tests pass, the new feature should still be tested by stepping through the application and validating the outcome, either manually or automatically.

Smoke Testing

Smoke testing is a form of functional testing where you run a small set of tests against the most important features. It can be as simple as checking if the software even starts up. The idea is to quickly verify if subsequent testing even makes sense. It can tell you, for instance, if the application is so severely broken that a release isn't even an option.

Smoke testing is performed before running more extensive tests as part of a new release. Some teams will also run their smoke tests daily. If smoke tests fail, it makes no sense to conduct further tests. If they pass, the next step in the testing process is to execute the more detailed tests.

Smoke testing is sometimes performed directly against the production environment.
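A smoke test can be as small as one HTTP check. Here's a minimal sketch with Python's requests library (the staging URL is a placeholder):

```python
import requests


def test_application_is_up():
    # Smoke test: if this fails, running the full suite makes no sense.
    response = requests.get("https://staging.example.com/health", timeout=5)
    assert response.status_code == 200
```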

Integration Testing

Integration testing is a step up after unit testing. With unit testing, developers write tests for small units of code. Integration tests verify that two or more units work together correctly. So integration tests are written after the separate units (and their tests) have been written.

If integration tests fail, it means that individual units might be implemented correctly but something is off with the communication between them.
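Here's a sketch of what this looks like in Python, assuming two hypothetical units, save_user and find_user, that share a database. The integration test exercises them together rather than in isolation:

```python
import sqlite3


def save_user(conn, name):
    # Unit 1: writes a user row.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))


def find_user(conn, name):
    # Unit 2: reads a user row back.
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None


def test_saved_user_can_be_found():
    # Verify that both units work together against a real
    # (in-memory) database, not just on their own.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    save_user(conn, "ada")

    assert find_user(conn, "ada") == "ada"
```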

Regression Testing

Regression testing happens after changes have been made. Tests are run again to ensure that previously working functionality continues to work. If it doesn't, that's called a regression, and it should be fixed.

In modern software development, teams have a suite of automated tests that they can run easily. After every change, it's easy to run unit tests, integration tests, user interface tests, etc. Any test that fails for a feature that wasn't expected to change is a regression. The team will have to investigate what caused the issue and fix it.

API Testing

API testing is the practice of testing your application's back-end API. Because it used to be very difficult to automate user interface tests, API tests have traditionally been a good alternative: a way of testing almost the entire application, a kind of end-to-end test.

API tests are run when a feature is fully finished. If they fail, further investigation is needed as it can indicate a bug in the code. But it could also indicate an error in integration with other services, such as a database or another application. If they succeed, it means the team can be fairly confident that the feature works as intended, except perhaps for the user interface.
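Here's a minimal sketch of an API test in Python with the requests library (the endpoint, payload, and response shape are assumptions):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # placeholder environment


def test_create_and_fetch_todo():
    # Exercise the back-end API end to end, without touching the UI.
    created = requests.post(f"{BASE_URL}/todos", json={"title": "write tests"})
    assert created.status_code == 201

    todo_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/todos/{todo_id}")
    assert fetched.status_code == 200
    assert fetched.json()["title"] == "write tests"
```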

Security Testing

Security testing aims to uncover security issues that could put the software at risk of being hacked. Security testing comes in many forms.

  • Scanning libraries for known vulnerabilities in certain versions
  • Analyzing source code for known bad practices that expose the system to threats
  • Actively trying to hack the application, either manually or automatically

Certain types of security tests can be set up at the beginning of the project (like automatically scanning the libraries and source code). Others will need to wait until the application has at least some features finished.
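As one small example of an automatable security test, you can check that HTTP responses carry the hardening headers you expect. A sketch in Python (the URL and header list are assumptions; this is a sanity check, not a full audit):

```python
import requests


def test_security_headers_are_present():
    # Verify that responses set common hardening headers.
    # This complements, but never replaces, a real penetration test.
    response = requests.get("https://staging.example.com", timeout=5)

    for header in ("Content-Security-Policy",
                   "X-Content-Type-Options",
                   "Strict-Transport-Security"):
        assert header in response.headers, f"missing {header}"
```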

When security tests fail, assess how critical the findings are. Minor issues in an application that doesn't run on the public internet are not as critical as major risks in a public web application. If no issues arise, this doesn't mean your application is 100 percent secure. New vulnerabilities in software are found every day. Therefore, it's key to run your security tests regularly.

Performance Testing

You should conduct performance testing when you plan to scale the application. At the beginning of a project, you can concentrate on fleshing out the idea and adding critical features. But once you plan to have your application take on more work (more users, more requests, more calculations), performance testing will help you determine if your software can handle the load.

There are several types of performance tests. Load testing is the process of steadily increasing the load on your application until it can no longer handle it. This is the tipping point, and it shows you how big a load your application can take.
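For example, a load testing tool such as Locust lets you describe user behavior in Python and then ramp up the number of simulated users. A minimal sketch (the paths are placeholders):

```python
from locust import HttpUser, between, task


class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task
    def load_home_page(self):
        # Locust records response times and failures per endpoint,
        # so you can see where the application stops keeping up.
        self.client.get("/")
```

You'd then run it with locust -f loadtest.py --host https://staging.example.com and watch response times and failure rates as the simulated user count climbs.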

Another type of performance testing is memory profiling. It analyzes the amount of memory used by an application and gives you pointers about where you can reduce the memory footprint or where you have a memory leak.
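In Python, for example, the standard library's tracemalloc module supports this kind of analysis by snapshotting allocations. A minimal sketch:

```python
import tracemalloc


def build_report():
    # Stand-in for real application work that allocates memory.
    return [str(i) * 100 for i in range(100_000)]


tracemalloc.start()
report = build_report()
snapshot = tracemalloc.take_snapshot()

# Show the source lines responsible for the most memory.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```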

Improving performance is often nontrivial. Failing performance tests might require a redesign of the application's architecture.

User Interface Testing

User interface testing has been one of the hardest types of tests to automate. But we've come a long way, and UI testing is now possible for many platforms. It involves stepping through the user interface, performing actions, and verifying the result. As such, user interface testing is true end-to-end testing.

User interface testing is conducted when a feature is fully finished. These tests can then be repeated before every new release to ensure that the user experience hasn't changed unintentionally.

Failing tests can have many causes: a bug in a single function, an external service that isn't available, a wrong API call, or a bug in the user interface. But a slight change to the user interface can also cause tests to fail even though the user experience is still up to par. This means the team will have to change the test, not the application. There is also specific testing for iOS and Android.

Types of Test Automation Frameworks

You've already seen that there are different types of automated tests, each with their own value. But there's another way of categorizing automated tests: test automation frameworks. These frameworks provide a methodology of designing and executing tests.

Here are some examples.

  • Linear automation frameworks. Each test case is a procedural set of steps in a script that the tool runs through step by step. The steps may have been generated by a tool that "records" user interaction, but this isn't a requirement.
  • Modular-based testing framework. In this framework, the application is divided into modules that are tested separately. But it is possible to combine test scripts to test features across multiple modules.
  • Library architecture testing framework. This framework basically improves the linear framework. Common steps are grouped into single functions that can then be called throughout different tests.
  • Data-driven framework. Data-driven tests are tests that run through a series of steps but take their input and expected outcome from a dataset, such as an Excel file. The test can then be run multiple times for different inputs and expected outcomes (see the pytest sketch after this list).
  • Keyword-driven framework. These tests are written in a nontechnical language that forms an abstraction over the code that actually performs a step in the test. Each keyword (or sentence) executes a step or verifies an outcome. This allows you to reuse steps over different tests or to easily change the implementation of a step without changing the keywords.
  • Hybrid testing framework. Many tools allow you to combine two or more of the above frameworks, making them hybrid testing tools.
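In code-based tools, the data-driven style often takes the shape of a parametrized test. Here's a sketch with pytest (the slugify function is hypothetical):

```python
import pytest


def slugify(title):
    # Hypothetical function under test.
    return title.strip().lower().replace(" ", "-")


@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-a-slug", "already-a-slug"),
])
def test_slugify(title, expected):
    # The same steps run once per row of input and expected outcome.
    assert slugify(title) == expected
```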

What to Look for in Automated Testing Tools

There are probably thousands of automation testing tools out there. So how do you choose one? Of course, you'll have to choose a tool that fits the type of test you want to run and the type of application you want to test. Other criteria are things like programming language, price, support, community, ease of use, IDE integration, and which operating systems it must run on.

Let's take end-to-end testing, for example. Selenium is a popular choice. It's an open-source tool for testing web applications. What's nice is that it allows cross-browser tests, i.e., you can run the same scripts against different browsers. This would require a lot of work to perform manually.

A similar tool, Appium, exists for mobile applications. Both Selenium and Appium allow you to write tests in several popular programming languages (Java, Python, Node.js, and many more). They also enable testers to record interactions in the application and create tests from them. Over time, though, developers might need to step in to create reusable blocks of interaction and verification steps, further adding to their workload.
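For example, a minimal Selenium test in Python might look like this (assuming Chrome and a placeholder URL):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_button_is_visible():
    # Drive a real browser through the UI, then verify what the user sees.
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/login")
        button = driver.find_element(By.ID, "login-button")
        assert button.is_displayed()
    finally:
        driver.quit()
```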

Automated Tests in Summary

Automated testing provides great benefits for software development teams. It allows developers to run a big suite of tests quickly and without much effort. It also gives them confidence to modify and extend their code and release regularly. Automated testing is crucial in a DevOps culture and makes teams more agile.

Unit and integration testing are probably the most well-known and widely used types of automated testing. They're also the easiest to implement and execute. But the other types of tests provide great value as well. Choosing the right automation testing tool is key to enabling teams to test their applications without taking too much time away from developing new features. A good tool will provide ways to create tests quickly while also allowing teams to update and maintain their tests easily.

Speaking of end-to-end testing, it's worth mentioning that Waldo can help take pressure off the development team. It allows testers to visually create modular UI tests for mobile applications without the need to write code. It also accounts for flakiness, something end-to-end tests notoriously suffer from.

If you still have any questions about automated testing, feel free to reach out!

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.