
What Is Integration Testing?

Unit testing is a popular technique for designing, writing, and verifying code. But it doesn't verify the interaction between different classes, modules, and components. This is where integration testing comes into play. Integration testing is a technique where you combine different pieces of code that have been unit tested separately and then test them when used together. It occurs after unit testing but before testing the entire application end to end. It helps developers verify that the combination of individual components works as expected. 

Integration Testing vs Unit Testing

Integration testing differs from unit testing. Unit testing happens when you test a single unit of code. This is often a method or function you're implementing. You give the function the necessary input and verify the outcome. If the unit has any dependencies, you'll often mock or stub them. This ensures that you're only testing the function's code and nothing else. 
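
To make this concrete, here's a minimal sketch of a unit test written with JUnit 5. The class names (SignupValidator, UserRepository) are hypothetical; the repository dependency is stubbed with a lambda so that only the validator's own logic is under test.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Hypothetical dependency of the unit under test.
interface UserRepository {
    boolean usernameTaken(String username);
}

// The unit under test: it contains only the validation logic.
class SignupValidator {
    private final UserRepository users;
    SignupValidator(UserRepository users) { this.users = users; }
    boolean isValidUsername(String username) {
        return username.length() >= 3 && !users.usernameTaken(username);
    }
}

class SignupValidatorTest {
    @Test
    void rejectsUsernamesThatAreAlreadyTaken() {
        // The repository is stubbed, so only the validator's own code is exercised.
        UserRepository stub = username -> true;

        assertFalse(new SignupValidator(stub).isValidUsername("peter"));
    }
}
```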

Once you've implemented multiple units, there's a system of units that depend on each other. You can now group these units together into coherent groups that represent a smaller piece of your application. When you call these subsystems in tests, you're executing integration tests. You want to verify that the units interact correctly with each other. 
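
In contrast, an integration test drops the stubs and wires real units together. The sketch below (again JUnit 5, with made-up class names) combines two real components and verifies that they interact correctly.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Two hypothetical units that were unit tested separately.
interface TaxRates {
    double rateFor(String country);
}

class CountryTaxRates implements TaxRates {
    public double rateFor(String country) {
        return "BE".equals(country) ? 0.21 : 0.20;
    }
}

class InvoiceTotal {
    private final TaxRates rates;
    InvoiceTotal(TaxRates rates) { this.rates = rates; }
    double totalFor(double net, String country) {
        return net * (1 + rates.rateFor(country));
    }
}

class InvoiceIntegrationTest {
    @Test
    void calculatorAndRateTableWorkTogether() {
        // No stubs: both real implementations are combined in this test.
        InvoiceTotal invoice = new InvoiceTotal(new CountryTaxRates());

        assertEquals(121.0, invoice.totalFor(100.0, "BE"), 0.001);
    }
}
```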

Why Is Integration Testing Important?

Integration testing is a necessary step in the software development process because you can end up with a large suite of unit tests that all pass yet still have a system that contains bugs. This can happen because one component calls another in the wrong way or interprets the results of another unit incorrectly. Sometimes, these issues become apparent in unit tests, but not always. 

Integration tests may also use external services like a database. These are traditionally more difficult to unit test because their implementation is out of your control. To verify that your units of code are interacting with these external components correctly, you can use integration tests. 

Benefits and Challenges of Integration Testing

Integration testing has its advantages and disadvantages. Different contexts may require different types of tests: unit, integration, or end-to-end tests. Let's explore these benefits and challenges. 

Advantages of Integration Testing

As described, it's important that you test the interaction of your components. Even if all your unit tests pass, the different functions in your code may be interacting incorrectly, causing bugs. Integration tests allow you to catch these issues early. 

Unit tests alone can't always cover all test cases. Sometimes, you'll want or need two or more components of your code to test a scenario. Immediately, this makes them integration tests. Also, it can be useful to test integration of your code with pieces of code that are not under your control. Think about how your code makes database calls, for example, or HTTP calls to other services. 

Finally, because integration tests use more of the real components of your software, there is less need for mocking or stubbing certain components. This makes it easier to refactor the underlying code without having to change the test. 

Challenges of Integration Testing

Integration testing doesn't come without challenges. It starts with the definition of integration testing itself. Some teams will deliberately or accidentally write integration tests as part of their unit tests, and they end up with automated tests that combine multiple components but are categorized as unit tests (they could be grouped in a unit testing library, for example). This doesn't have to be a big issue, but it can make the boundaries unclear for software developers. 

Another challenge is that adding more components to your tests often makes them more brittle and slower. This is what the test pyramid is all about. It tells you that the bulk of your complete test suite should consist of unit tests. Beyond that, you should have fewer integration tests and, at the top, a few end-to-end tests. The idea is that the more components you're including in your tests, the fewer tests of that type you should have. 

Integration Testing Techniques

When you design and run integration tests, you can choose between black box and white box testing. The difference comes down to how much you know about the underlying implementation of the feature. 

Black Box Testing

In black box testing, you design your tests by looking at the specifications or user requirements. You don't look at the code, and you design your tests in such a way that the test doesn't need to know about the inner workings of the system it's testing. The idea is to provide the inputs to the system and verify the outcome. 
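
As a small illustration, a black box test can be derived purely from a requirement such as "passwords must be at least eight characters." The JUnit 5 sketch below (with hypothetical names) only feeds inputs and checks outputs; it never inspects how PasswordRules is implemented.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PasswordRulesBlackBoxTest {
    @Test
    void rejectsPasswordsShorterThanEightCharacters() {
        assertFalse(PasswordRules.isValid("short"));
    }

    @Test
    void acceptsPasswordsOfEightOrMoreCharacters() {
        assertTrue(PasswordRules.isValid("longenough"));
    }
}

// Included only so the sketch compiles; a black box test would not look at this code.
class PasswordRules {
    static boolean isValid(String password) {
        return password != null && password.length() >= 8;
    }
}
```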

White Box Testing

When you choose white box testing, you can look at the code to find new test cases. Also, your tests can verify certain implementation details. For example, you can check if certain components were called in the correct way with the correct parameters. 
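
For instance, a white box test might assert that a collaborator was invoked with specific arguments. The following sketch assumes JUnit 5 and Mockito are on the classpath; the OrderService and EmailSender types are made up for illustration.

```java
import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Hypothetical collaborator that the service is expected to call.
interface EmailSender {
    void send(String to, String subject);
}

class OrderService {
    private final EmailSender emails;
    OrderService(EmailSender emails) { this.emails = emails; }
    void placeOrder(String customerEmail) {
        // ... order handling would go here ...
        emails.send(customerEmail, "Order confirmation");
    }
}

class OrderServiceWhiteBoxTest {
    @Test
    void sendsConfirmationWithTheExpectedParameters() {
        EmailSender emails = mock(EmailSender.class);

        new OrderService(emails).placeOrder("jane@example.com");

        // White box: the test asserts an implementation detail, the exact call that was made.
        verify(emails).send("jane@example.com", "Order confirmation");
    }
}
```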

Types of Integration Testing Approaches

There are different approaches to integration testing. Let's look at some. 

Big Bang Testing

With big bang testing, you wire up almost all the developed modules and test them as a whole. This means you test the entire application or at least most of it. It's almost end-to-end testing, and sometimes it basically is. 

The advantage of big bang testing is that it saves time and is fairly easy to set up. However, when you find errors, it can be difficult to track them down because they could originate in any part of the system. 

Incremental Testing

The opposite of big bang testing is incremental testing. In incremental testing, you start by combining two components and stubbing out any other dependencies. You can then expand and add more and more components as tests continue to pass. 

The advantage is that an error can only be caused by a limited set of components, usually the component you just added to the test. However, you do need to add fake implementations of any dependencies that you're not testing yet. 

Stubs and Drivers

Stubs and drivers are part of incremental testing. If a component depends on another component that you won't or can't include in the test, you can stub it. This means creating a fake implementation that can return the response you want in your test case. Often, you can also verify if the stub was called in the correct way. There are libraries you can use to easily create stubs in most programming languages. 
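
Here's what a hand-rolled stub can look like. The PaymentGateway and Checkout types are hypothetical; the stub returns a canned response and records how it was called so the test can verify the interaction.

```java
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical dependency we don't want to call for real in the test.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Hand-rolled stub: returns a canned answer and records how it was called.
class StubPaymentGateway implements PaymentGateway {
    final List<String> chargedAccounts = new ArrayList<>();
    public boolean charge(String account, double amount) {
        chargedAccounts.add(account);
        return true; // the response we want for this test case
    }
}

// Hypothetical component under test that depends on the gateway.
class Checkout {
    private final PaymentGateway gateway;
    Checkout(PaymentGateway gateway) { this.gateway = gateway; }
    boolean pay(String account, double amount) { return gateway.charge(account, amount); }
}

class CheckoutTest {
    @Test
    void checkoutChargesTheCustomerAccount() {
        StubPaymentGateway gateway = new StubPaymentGateway();

        new Checkout(gateway).pay("ACC-1", 49.99);

        // Verify that the stub was called in the correct way.
        assertEquals(List.of("ACC-1"), gateway.chargedAccounts);
    }
}
```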

Drivers are fake implementations of components that call the components you're trying to test. You need them if the calling component hasn't been implemented yet or if you don't want to include it in the test. In automated integration tests, the driver is usually the test case code itself. 

Bottom-Up Integration Testing

The drivers are used in bottom-up integration testing. In this technique, you start at the bottom of your call chain and work your way up. You include components that call your lowest-level component. And you let drivers call those higher-level components. As you work your way up, you can replace your drivers with the real implementations. 

Top-Down Integration Testing

The opposite of bottom-up integration testing is top-down integration testing. In this case, you work the other way around. You start at the top of the call chain, like the API for example. Components below that top-most level are replaced by stubs. As you replace your stubs with real implementations, you work your way down until you have the whole system covered. 

Sandwich Testing

Sandwich testing (or hybrid integration testing) combines the bottom-up and top-down approaches. As such, it works from the UI or API down and from the lowest layer up, meeting in the middle. Sandwich testing uses both drivers and stubs. 

How to Perform Integration Testing

Before you start integration testing, make sure that your team has a plan on how it will do so. Will the developers write integration tests, or is that a task for testers? Which approach will you use (big bang, top down, bottom up, or sandwich)? Will you be writing specific test scenarios that remain fixed over time, or will you increase the test surface until you have (almost) the entire application under test? 

You will also need to plan for the time required to design, write, and perform the integration tests. Integration testing can be a time-consuming undertaking. 

If you need to integrate with external services, make sure you have approvals to set up a test environment. Also look at how this test environment can be reset to its initial state after running your tests. This ensures that a subsequent test run isn't hindered by test data from previous runs. 
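
One lightweight way to handle this in automated tests is to reset state around each test. The JUnit 5 sketch below uses a hypothetical in-memory repository as a stand-in for a real database; with a real external service, the setup and teardown methods would seed and clean the actual test environment.

```java
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical repository standing in for an external database.
class CustomerRepository {
    private final Map<String, String> rows = new HashMap<>();
    void save(String id, String name) { rows.put(id, name); }
    String find(String id) { return rows.get(id); }
    void deleteAll() { rows.clear(); }
}

class CustomerRepositoryIntegrationTest {
    private final CustomerRepository repository = new CustomerRepository();

    @BeforeEach
    void seedTestData() {
        repository.save("c-1", "Ada");
    }

    @AfterEach
    void resetEnvironment() {
        // Return the environment to its initial state so later runs start clean.
        repository.deleteAll();
    }

    @Test
    void findsTheSeededCustomer() {
        assertEquals("Ada", repository.find("c-1"));
    }
}
```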

Finally, look at which test tools you'll be using. You should automate as much as possible, so look into tools like Cucumber, Selenium, or Waldo. If you're looking to do integration testing with just a few components, unit testing tools like NUnit or JUnit can be sufficient too. 

Now implement your chosen approach. Discover new test cases using the black box or white box techniques and write or design them in the integration testing tool you chose. Then add them to your software development process and run the tests regularly. 

Entry and Exit Criteria for Integration Testing

Let's take a step back and look at when you can start integration testing and when you can move on to a subsequent phase. These are the entry and exit criteria for integration testing. 

Entry Criteria

As mentioned, integration testing is a form of testing that comes after unit testing. This doesn't mean you can't still write unit tests when you're working on integration tests. But it makes little sense to craft integration tests if you don't have any unit tests in place. Unit tests can cover a broader range of test cases in less time because they require less setup. 

Another entry criterion is that you must have a test environment that enables you to perform the integration tests: databases, servers, external services, and specific hardware. And finally, it must be clear to developers how they will integrate the different components. 

Exit Criteria

When can you move on to the next phase in testing? When your exit criteria are fulfilled. The next phase is often called system testing. It's where we test the complete application as it will be used by end users. 

Of course, one important exit criterion for integration testing is that your integration tests all pass. Any bugs you found during integration testing must now be fixed or added to a backlog if you decide it isn't a blocking issue. 

Another exit criterion is that all test scenarios have been executed. If certain functionality is still (partially) untested, you should create integration tests for those scenarios first. 

Best Practices in Integration Testing

Let's finish this article by looking at some best practices for integration testing. 

A first tip is to start integration testing only on components that have been thoroughly unit tested. If you find a bug during integration testing, try to reproduce it in a unit test and fix it there. Then rerun your integration test to verify that the fix resolved the issue. 

Another best practice is to automate as much as you can. Use tools to automate your integration tests and see if you can integrate them into your CI/CD pipeline. If your integration tests take a long time to run, consider running them only once a day. Because integration tests can take longer to run, make sure the bulk of your testing is in unit tests. 

Focus your integration tests on the integration of the software components. Any business logic should be tested with unit tests. An integration test that fails should ideally point to integration errors or changes in the test environment (e.g., issues with external services or hardware). 

Don't wait too long to start integration testing. Early feedback saves time and money. And don't stop at integration testing: also run end-to-end tests and UI tests. 

Conclusion

Integration testing is an important part of software development. It ensures that individual components interact correctly with each other and with external services. Just like in other types of testing, you can use black box or white box techniques to design integration tests. You also saw that you can work from the bottom up, take a top-down approach, or combine the two with the sandwich technique. You can replace unfinished components with stubs or drivers. 

Integration testing comes after unit testing and before end-to-end testing. Ideally, you'll automate your integration tests so that you can add it to your CI/CD pipeline. We mentioned some tools that you can use like Cucumber, Selenium, Appium, and Waldo. If you're interested, you can schedule a product demo to see how Waldo can help with integration testing. 

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.

11 Types of Automation Tests, Explained

One of the best ways of explaining DevOps is, "You build it, you run it." This also means testing your software. Automating tests frees up time for other tasks, such as adding features. That's why automation is central to DevOps.

DevOps teams tend to automate as much as possible: testing, building, releasing, monitoring, alerting, infrastructure management, and much more. When it comes to automated tests, there are many options to choose from. Each has its own advantages at different stages in the software development life cycle. It's good to know where you can use which type of test and which problems they solve.

You can execute automated tests any time you like, over and over again. Without automation, this process would be tedious, error-prone, sometimes even impossible. After you make a change to the code, you can run tests to see if you broke something. Or you can run them at scheduled intervals without the need for any human intervention.

While they certainly can have issues, automated tests won't forget certain steps. They won't overlook minor errors that a human tester might miss. If set up well, they'll execute the software and verify the results the same way every time.

Some tools even allow test automation in scenarios that would be impossible to test manually. Load tests are the perfect example. Computers can simulate hundreds or thousands of requests to a web server, something that's not even possible without automation. But even simpler tests can be hard to execute manually, such as simulating certain error conditions. There are many additional benefits of automated testing, such as having a safety net to refactor, enabling developers to deliver business value faster while keeping code quality high.

Software testing has therefore become a lot easier with automated tests. In fact, we couldn't fully embrace DevOps without it. But there are different types of automated tests, each with their own use case.

Before we dive into the different kinds of automated tests, it makes sense to look at the broader context: what automated testing is, when to use it, and how these tests are formed. After covering the different types of automated tests, we'll discuss automated testing tools and how to evaluate them.

What Is Automated Testing?

Automated testing is the practice of having a machine run tests that analyze certain parts of your software.

Manual testing also runs tests to analyze the software, but in manual testing, humans run the tests. We start the software and run through the steps necessary to verify a certain outcome. We then verify that outcome and take note of it. This is often a task for dedicated testers. However, manually testing even a fairly limited piece of software still requires running through many test cases. This makes it time-consuming and subject to error. It also doesn't scale well in a DevOps world where you want to release features in small steps at regular intervals.

Automated testing doesn't remove the need for testers to manually run tests. It does, however, solve the problem of scalability. Automated tests will run much faster than manual tests, and they free up time for developers to develop.

As you'll see, automated testing tools come in many different forms. In essence, they allow a human to design a test scenario and have a tool run the tests.

An automated test consists of three basic steps:

  • Setting up the context in which the test will run
  • Executing the steps that simulate real-world behavior
  • Verifying the resulting state of the software

You only need to command the testing tool to run through the defined steps. If all goes well, the test succeeds. If the software isn't in the correct state, the tool will indicate as much and tell you what's wrong.
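
Here's how those three basic steps typically look in code. This is a minimal JUnit 5 sketch with a hypothetical ShoppingCart class; the comments mark the setup, execution, and verification steps.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ShoppingCartTest {
    @Test
    void totalReflectsAddedItems() {
        // 1. Set up the context in which the test will run.
        ShoppingCart cart = new ShoppingCart();

        // 2. Execute the steps that simulate real-world behavior.
        cart.add("book", 12.50);
        cart.add("pen", 2.00);

        // 3. Verify the resulting state of the software.
        assertEquals(14.50, cart.total(), 0.001);
    }
}

// Hypothetical class under test, included so the sketch is self-contained.
class ShoppingCart {
    private double total = 0;
    void add(String item, double price) { total += price; }
    double total() { return total; }
}
```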

Most automated testing tools allow you to reuse parts of your tests in other tests. If the tool works in code, developers can easily group functionality in modules that they can reuse when running different tests. If it's a graphical testing tool, you can often define steps so that you can use them in different scenarios. Reusability adds to the scalability of automated testing. It allows teams to build tests faster, giving them feedback about their changes more frequently. This in turn enables organizations to get their products out to end users faster and in a more reliable state.

So, when should you implement automated tests?

When to Rely On Automated Software Testing

Automated testing can be used in many areas and in many stages of software development. The most popular and well-known type of automated testing is unit testing. You'll typically set this up early in the software development process. Ideally, you'll do so at the same time the code is written. This gives developers immediate feedback about the changes they make.

But automated testing is more than unit testing. We'll go into the different types later, but certain types of tests might not execute after every change. Some test suites take so long to run that it doesn't make sense to execute them after every change. Running these tests would slow down developers rather than enable them to deliver faster.

With other test types, developers can't write them in parallel with the code. Testing the UI of a mobile app is one example. You need to finish the new features entirely before you can write tests. But once you have these tests, you can easily run them again in the future.

Finally, there are cases where it makes sense to write and execute tests after shipping the software. Performance tests are an example. You might want to test how a web application reacts under a heavy load. Another case is when users report bugs. If you can isolate the bug to a smaller unit or set of units, you can write a unit or integration test to reproduce it. By debugging this single test, you can step through your code, see what is actually executed, then identify and fix the bug.

Automated testing is useful in all stages of your software development workflow. Now let's look at the life cycle of automated tests.

Typical Life Cycle of Automated Tests

Most automated tests go through a similar life cycle. Some organizations take these steps deliberately. In others, teams might not even be aware that they're performing certain steps.

Everything starts with determining the testing scope. Will you be testing a single function in code? Or the entire application? Will you be testing functionality or something else, such as performance or memory usage?  You must also decide on your test cases. What exactly are you testing? Which steps will you execute in your application, and what will you verify?

The test automation tool you select will depend on your scope. Different tools serve different purposes. It's probably not a good idea to write performance tests with a unit testing tool, for example.

Once you've made decisions about your test cases and chosen an automation tool, you can set up your testing environment. This could be a build server, a staging environment, or a temporary container or virtual machine. Whatever you need to execute your tests on. In some cases, you can automate the setting up of the test environment in your test script.

In your test script, you define what the test must do. This happens in a language the automation tool understands. The test script can often be split into three steps:

  • Setting up prerequisites such as required test data or faking expected network calls
  • Executing the steps you want to test
  • Validating the test results

Finally, you can execute the test. But you're not done yet. You should evaluate the test results and confirm that your automated test worked as expected. If so, you can execute the test regularly. When you repeat this process for new test cases, you can skip certain steps: you'll already have a test environment and an automation tool. This makes it easier to add new test cases to your test suite.

But don't neglect test maintenance. After a while, you'll have many tests. And like with your production code, you'll have to maintain the quality of these test scripts, both technically (is the script clear, readable, and maintainable?) and functionally (does the test still have value?).

Next, let's look at the different types of tests.

Types of Automated Tests

There are many different types of automated tests, and this list may not be complete. But I believe it covers the most popular ones that should help any development team test their application thoroughly. Some of these types overlap, while others stand apart.

Functional Testing

Functional testing is performed against finished features. When a testing team steps through the UI to execute a test scenario, they're performing functional tests. They aren't concerned with the inner workings of the software. That's why we sometimes call it black box testing. We can't see what's in the box.

Functional testing is the process of making sure the application behaves as an end user would expect it to. If these tests pass, you should be able to deliver an application that solves the end user's problem as expected. If they fail, this indicates that the application would not do so, and the development team will have to fix this. It could be a small fix for a small bug, but it might also require a larger effort.

Even if all functional tests pass, this doesn't guarantee that the end user will be happy with the application. Maybe some features are missing. Perhaps a feature works but doesn't satisfy the end user enough.

Smaller types of tests like unit or integration tests can also be functional tests as long as they're testing functionality without verifying the concrete implementation.

Nonfunctional Testing

Nonfunctional testing is another broad category. It aims to test certain aspects of the application that might affect end users even if the application still includes all the functionality that the end user needs. Here are some examples.

  • Security. Is the application secure?
  • Performance. Does the application feel responsive?
  • Scalability. Can the application handle large amounts of data and/or user interactions?
  • Crash recovery. Does the application recover well from failures?

When these tests fail, it indicates that the software may work well for the end user now, but it risks having certain issues in the future that would be detrimental to the user experience. If all tests pass, there's no guarantee that everything is okay, but the application has proven to be robust.

Many of these tests only make sense once the application contains at least a few implemented features. But it's good practice to take these nonfunctional requirements into account early in the application design. Addressing them later will be much harder. Good luck making an application scalable if it was never designed to be.

Nonfunctional tests are often executed on a regular basis but not necessarily after every change.

Unit Testing

Unit testing is conducted while writing code. It provides developers with fast feedback about the code they're writing. Instead of having to spin up the application after each change, they can run unit tests.

After some time, an extensive test suite gives developers confidence to change existing code. If they accidentally break something, the test suite will inform them, and they can fix the bug. If the tests pass, the new feature should still be tested by stepping through the application and validating the outcome, either manually or automatically.

Smoke Testing

Smoke testing is a form of functional testing where you run a small set of tests against the most important features. It can be as simple as checking if the software even starts up. The idea is to quickly verify if subsequent testing even makes sense. It can tell you, for instance, if the application is so severely broken that a release isn't even an option.

Smoke testing is performed before running more extensive tests as part of a new release. Some teams will also run their smoke tests daily. If smoke tests fail, it makes no sense to conduct further tests. If they pass, the next step in the testing process is to execute the more detailed tests.

Smoke testing is sometimes performed directly against the production environment.
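
A smoke test can be as small as a single health check against the environment you're about to test. The sketch below uses Java's built-in HTTP client (Java 11+) with a hypothetical staging URL and /health endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SmokeTest {
    // Hypothetical base URL of the environment under test.
    private static final String BASE_URL = "https://staging.example.com";

    @Test
    void applicationIsUpAndAnswersItsHealthCheck() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/health"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // If even this fails, running the more extensive test suites makes little sense.
        assertEquals(200, response.statusCode());
    }
}
```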

Integration Testing

Integration testing is a step up after unit testing. With unit testing, developers write tests for small units of code. Integration tests verify that two or more units work together correctly. So integration tests are written after the separate units (and their tests) have been written.

If integration tests fail, it means that individual units might be implemented correctly but something is off with the communication between them.

Regression Testing

Regression testing happens after changes have been made. Tests are run again to ensure that previously working functionality continues to work. If they don't, it's called a regression, and it should be fixed.

In modern software development, teams have a suite of automated tests that they can run easily. After every change, it's easy to run unit tests, integration tests, user interface tests, etc. Any test that fails for a feature that wasn't expected to change is a regression. The team will have to investigate what caused the issue and fix it.

API Testing

API testing is the practice of testing your application's back-end API. This has traditionally been a way of testing almost the entire application, a kind of end-to-end test. But because it used to be very difficult to test user interfaces, API tests were a good alternative.

API tests are run when a feature is fully finished. If they fail, further investigation is needed as it can indicate a bug in the code. But it could also indicate an error in integration with other services, such as a database or another application. If they succeed, it means the team can be fairly confident that the feature works as intended, except perhaps for the user interface.

Security Testing

Security testing aims to uncover security issues that could put the software at risk of being hacked. Security testing comes in many forms.

  • Scanning libraries for known vulnerabilities in certain versions
  • Analyzing source code for known bad practices that expose the system to threats
  • Actively trying to hack the application, either manually or automatically

Certain types of security tests can be set up at the beginning of the project (like automatically scanning the libraries and source code). Others will need to wait until the application has at least some features finished.

Security tests that fail should be assessed as to how critical they are. Minor issues for an application that doesn't run on the public internet are not as critical as major risks in a public web application. If no issues arise, this doesn't mean your application is 100 percent secure. New vulnerabilities in software are found every day. Therefore, it's key to run your security tests regularly.

Performance Testing

You should conduct performance testing when you plan to scale the application. At the beginning of a project, you can concentrate on fleshing out the idea and adding critical features. But once you plan to have your application take on more work (more users, more requests, more calculations), performance testing will help you determine if your software can handle the load.

There are several types of performance tests. Load testing is the process of steadily increasing the load on your application until it can no longer handle it. This is the tipping point, and it shows you how big a load your application can take.
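
A very simplified load test can be sketched with nothing more than a thread pool: fire a batch of concurrent calls, count failures, and measure the elapsed time. In the example below, callService() is a placeholder; in a real load test it would issue an actual request to your application, and dedicated load-testing tools would give you far richer metrics.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadTest {
    // Placeholder for the system under test; a real load test would issue an HTTP request here.
    static boolean callService() {
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        int simulatedUsers = 200;
        ExecutorService pool = Executors.newFixedThreadPool(50);
        CountDownLatch done = new CountDownLatch(simulatedUsers);
        AtomicInteger failures = new AtomicInteger();

        long start = System.nanoTime();
        for (int i = 0; i < simulatedUsers; i++) {
            pool.submit(() -> {
                try {
                    if (!callService()) failures.incrementAndGet();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%d simulated users, %d failures, %d ms total%n",
                simulatedUsers, failures.get(), elapsedMs);
    }
}
```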

Another type of performance testing is memory profiling. It analyzes the amount of memory used by an application and gives you pointers about where you can reduce the memory footprint or where you have a memory leak.

Improving performance is often nontrivial. Failing performance tests might require a redesign of the application's architecture.

User Interface Testing

User interface testing has been one of the hardest types of tests to automate. But we've come a long way, and UI testing is now possible for many platforms. It involves stepping through the user interface, performing actions, and verifying the result. As such, user interface testing is true end-to-end testing.

User interface testing is conducted when a feature is fully finished. These tests can then be repeated before every new release to ensure that the user experience hasn't changed unintentionally.

Failing tests can have many causes: a bug in a single function, an external service that isn't available, a wrong API call, or a bug in the user interface. But a slight change to the user interface can also cause tests to fail even though the user experience is still up to par. This means the team will have to change the test, not the application.

Types of Test Automation Frameworks

You've already seen that there are different types of automated tests, each with their own value. But there's another way of categorizing automated tests: test automation frameworks. These frameworks provide a methodology of designing and executing tests.

Here are some examples.

  • Linear automation frameworks. Each test case is a procedural set of steps in a script that the tool runs through step by step. The steps may have been generated by a tool that "records" user interaction, but this isn't a requirement.
  • Modular based testing framework. In this framework, the application is divided into modules that are tested separately. But it is possible to combine test scripts to test features across multiple modules.
  • Library architecture testing framework. This framework basically improves the linear framework. Common steps are grouped into single functions that can then be called throughout different tests.
  • Data-driven framework. Data-driven tests are tests that run through a series of steps but take their input and expected outcome from a dataset, such as an Excel file, for example. The test can then be run multiple times for different inputs and expected outcomes (see the sketch after this list).
  • Keyword-driven framework. These tests are written in a nontechnical language that forms an abstraction over the code that actually performs a step in the test. Each keyword (or sentence) executes a step or verifies an outcome. This allows you to reuse steps over different tests or to easily change the implementation of a step without changing the keywords.
  • Hybrid testing framework. Many tools allow you to combine two or more of the above frameworks, making them hybrid testing tools.
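
As an example of the data-driven approach, JUnit 5's parameterized tests (from the junit-jupiter-params module) run the same steps once per row of data. The discount rule below is a made-up example.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountTest {
    // Hypothetical logic under test: orders above 100 get a 10% discount.
    static double discountedTotal(double orderTotal) {
        return orderTotal > 100 ? orderTotal * 0.9 : orderTotal;
    }

    // The same steps run once per row; input and expected outcome come from the data.
    @ParameterizedTest
    @CsvSource({
            "50.0, 50.0",
            "100.0, 100.0",
            "200.0, 180.0"
    })
    void appliesTheDiscountAboveTheThreshold(double orderTotal, double expected) {
        assertEquals(expected, discountedTotal(orderTotal), 0.001);
    }
}
```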

What to Look for in Automated Testing Tools

There are probably thousands of automation testing tools out there. So how do you choose one? Of course, you'll have to choose a tool that fits the type of test you want to run and the type of application you want to test. Other criteria are things like programming language, price, support, community, ease of use, IDE integration, and which operating systems it must run on.

Let's take end-to-end testing, for example. Selenium is a popular choice. It's an open-source tool for testing web applications. What's nice is that it allows cross-browser tests, i.e., you can run the same scripts against different browsers. This would require a lot of work to perform manually.

A similar tool, Appium, exists for mobile applications. Both Selenium and Appium allow you to write tests in several popular programming languages (Java, Python, Node.js, and many more). They also enable testers to record interactions in the application and create tests from them. Over time, though, developers might need to step in to create reusable blocks of interaction and verification steps, which further increases the load on developers.

Automated Tests in Summary

Automated testing provides great benefits for software development teams. It allows developers to run a big suite of tests quickly and without much effort. It also gives them confidence to modify and extend their code and release regularly. Automated testing is crucial in a DevOps culture and makes teams more agile.

Unit and integration testing are probably the most well-known and widely used types of automated testing. They're also the easiest to implement and execute. But the other types of tests provide great value as well. Choosing the right automation testing tool is key to enabling teams to test their applications without taking too much time away from developing new features. A good tool will provide ways to create tests quickly while also allowing teams to update and maintain their tests easily.

Since we briefly looked at end-to-end testing as an example, it's worth mentioning that Waldo can help take pressure off the development team. It allows testers to visually create modular UI tests for mobile applications without the need to write code. It also accounts for flakiness, something end-to-end tests notoriously suffer from.

If you still have any questions about automated testing, feel free to reach out!

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.