What Is Integration Testing?

Unit testing is a popular software testing technique for designing, writing, and verifying code. But it doesn't verify the interaction between different modules, classes and components. This is where integration testing comes into play. Integration testing is a technique where you combine different pieces of code that have been unit tested separately and then test them when used together. It occurs after unit testing but before testing the entire application end to end. It helps developers verify that the combination of individual components works as expected. 

Integration Testing vs Unit Testing

Integration testing (sometimes also called system integration testing) is a different type of software testing than unit testing. Unit testing happens when you test a single unit of code. This is often a method or function you're implementing. You give the function the necessary input and verify the outcome. If the unit has any dependencies, you'll often mock or stub them. This ensures that you're only testing the function's code and nothing else. 
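
To sketch the idea, suppose a hypothetical `PriceService` depends on a `CurrencyConverter` (both names invented for illustration). A unit test replaces the dependency with a mock so that only the service's own logic runs:

```python
from unittest.mock import Mock

class PriceService:
    """Computes a price in the customer's currency via an injected converter."""
    def __init__(self, converter):
        self.converter = converter

    def price_in(self, amount_usd, currency):
        rate = self.converter.rate("USD", currency)
        return round(amount_usd * rate, 2)

def test_price_in_eur():
    # The converter dependency is mocked with a canned exchange rate,
    # so only PriceService's own arithmetic is verified.
    converter = Mock()
    converter.rate.return_value = 0.9
    service = PriceService(converter)
    assert service.price_in(100, "EUR") == 90.0

test_price_in_eur()
```

Because the converter is faked, this test stays green even if the real converter is broken; that's exactly the gap integration tests fill.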

Once you've implemented multiple units, there's a system of units that depend on each other. You can now group these units together into coherent groups that represent a smaller piece of your software application. When you call these subsystems in tests, you're executing integration tests. You want to verify that the units interact correctly with each other. 
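
As a minimal sketch, assuming two invented components, a `PriceService` and the `CurrencyConverter` it depends on, an integration test wires the real implementations together instead of stubbing one out:

```python
class CurrencyConverter:
    """A tiny in-memory converter, standing in for a real implementation."""
    RATES = {("USD", "EUR"): 0.9, ("USD", "GBP"): 0.8}

    def rate(self, frm, to):
        return self.RATES[(frm, to)]

class PriceService:
    """Computes a price in the customer's currency via an injected converter."""
    def __init__(self, converter):
        self.converter = converter

    def price_in(self, amount_usd, currency):
        return round(amount_usd * self.converter.rate("USD", currency), 2)

def test_price_service_with_real_converter():
    # Integration test: both real components run together, so the test
    # also catches contract mismatches (wrong argument order, units, etc.).
    service = PriceService(CurrencyConverter())
    assert service.price_in(100, "GBP") == 80.0

test_price_service_with_real_converter()
```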

Why Is Integration Testing Important?

Integration testing is a necessary step in an agile or DevOps software development process because you can end up with a large suite of unit tests that all pass but still have a system that contains bugs. This can happen when one component calls another in the wrong way or interprets the results of another unit incorrectly. Sometimes these issues become apparent in unit tests, but not always. 

Integration tests may also use external services like a database. These are traditionally more difficult to unit test because their implementation is out of your control. To verify that your units of code are interacting with these external components correctly, you can use integration tests. 

Benefits and Challenges of Integration Testing

Integration testing has its advantages and disadvantages. Different contexts may require different types of tests: unit, integration, or end-to-end tests. Let's explore these benefits and challenges. 

Advantages of Integration Testing

As described, it's important that you test the interaction of your components. Even if all your unit tests pass, the different functions in your code may be interacting incorrectly, causing bugs. Integration tests allow you to catch these issues early. 

Unit tests alone can't always cover all test cases. Sometimes, you'll want or need two or more components of your code to test a scenario. By definition, that makes them integration tests. Also, it can be useful to test the integration of your code with pieces of code that are not under your control. Think about how your code makes database calls, for example, or HTTP calls to other services. 

Finally, because integration tests use more of the real components of your software, there is less need for mocking or stubbing certain components. This makes it easier to refactor the underlying code without having to change the test. 

Challenges of Integration Testing

Integration testing doesn't come without challenges. It starts with the definition of integration testing itself. Some teams will willfully or accidentally write integration tests as part of their unit tests. And they end up with automated tests that combine multiple components but are categorized as unit tests (they could be grouped in a unit testing library for example). This doesn't have to be a big issue, but it can make boundaries unclear for software developers. 

Another challenge is that adding more components to your tests often makes them more brittle and slower. This is what the test pyramid is all about. It tells you that the bulk of your complete test suite should consist of unit tests. Beyond that, you should have fewer integration tests and at the top a few end-to-end tests. The idea is that the more components you are including in your tests, the less of that type of test you should have.

Integration Testing Techniques

When you design and run integration tests, you can choose between black box and white box testing strategies. The difference comes down to how much you know about the underlying implementation of the feature. 

Black Box Testing

In black box testing, you design your tests by looking at the specifications or user requirements. You don't look at the code, and you design your tests in such a way that the test doesn't need to know about the inner workings of the system it's testing. The idea is to provide the inputs to the system and verify the outcome. 
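A black box test, sketched here with an invented `shipping_cost` rule taken straight from a hypothetical requirement ("orders of 50 or more ship free"), exercises only inputs and outputs:

```python
def shipping_cost(order_total):
    """Hypothetical rule from the spec: orders of 50 or more ship free."""
    return 0.0 if order_total >= 50 else 4.95

# Black box tests: derived purely from the written requirement,
# feeding inputs and checking outputs, nothing else.
assert shipping_cost(50) == 0.0
assert shipping_cost(49.99) == 4.95
```

Note that the tests would stay exactly the same no matter how `shipping_cost` is implemented internally.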

White Box Testing

When you choose white box testing, you can look at the code to find new test cases. Also, your tests can verify certain implementation details. For example, you can check if certain components were called in the correct way with the correct parameters. 
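For instance, using Python's `unittest.mock` and an invented `send_welcome` function, a white box test can assert that a collaborator was called with the exact parameters you expect:

```python
from unittest.mock import Mock

def send_welcome(user, mailer):
    """Hypothetical unit under test: emails a greeting to a new user."""
    mailer.send(to=user["email"], subject="Welcome!")

mailer = Mock()
send_welcome({"email": "ada@example.com"}, mailer)
# White box assertion: we verify *how* the collaborator was called,
# which ties the test to an implementation detail.
mailer.send.assert_called_once_with(to="ada@example.com", subject="Welcome!")
```

The trade-off: such tests catch wiring mistakes precisely, but they can break when you refactor the implementation without changing its behavior.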

Types of Integration Testing Approaches

There are different approaches to integration testing. Let's look at some. 

Big Bang Testing

With big bang testing, you wire up almost all the developed software modules and test them as a whole. This means you test the entire software system or at least most of it. It comes close to end-to-end testing, and for small systems it effectively is. 

The advantage of big bang testing is that it saves time and is fairly easy to set up, especially for small systems. However, when you find errors, it can be difficult to track them down because they could originate in any part of the system. 

Incremental Testing

The opposite of big bang testing is incremental testing. In an incremental testing process, you start by combining two components and stubbing out any other dependencies. You can then expand step by step and add more and more components as tests continue to pass. 

The advantage is that an error can only be caused by a limited set of components, usually the component you just added to the test. However, you do need to add fake implementations of any dependencies that you're not testing yet. 

Stubs and Drivers

Stubs and drivers are part of incremental testing. If a component depends on another component that you won't or can't include in the test, you can stub it. This means creating a fake implementation that can return the response you want in your test case. Often, you can also verify if the stub was called in the correct way. There are libraries you can use to easily create stubs in most programming languages. 
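A short sketch of a stub, using Python's `unittest.mock` and a hypothetical `WeatherReport` component whose API client isn't available in the test:

```python
from unittest.mock import Mock

class WeatherReport:
    """Formats a report using an injected weather API client."""
    def __init__(self, client):
        self.client = client

    def summary(self, city):
        temp = self.client.temperature(city)
        return f"{city}: {temp} degrees"

# The real API client is out of reach in the test environment,
# so we stub it with a canned response...
stub_client = Mock()
stub_client.temperature.return_value = 21
report = WeatherReport(stub_client)
assert report.summary("Ghent") == "Ghent: 21 degrees"

# ...and afterwards verify that the stub was called in the correct way.
stub_client.temperature.assert_called_once_with("Ghent")
```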

Drivers are fake implementations of components that call the components you're trying to test. They're used to call the lower-level modules you want to test. You need them if the calling component hasn't been implemented yet or if you don't want to include it in the test. In automated integration tests, this is usually the code of the test case. 
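A minimal sketch of a driver, with an invented low-level `apply_discount` function and test code playing the role of the not-yet-implemented higher-level caller:

```python
def apply_discount(price, percent):
    """Hypothetical low-level pricing rule under test."""
    return round(price * (1 - percent / 100), 2)

def driver():
    # Plays the role of the not-yet-implemented checkout component:
    # it feeds realistic inputs to the lower-level module and checks results.
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(99.99, 0) == 99.99

driver()
```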

Bottom-Up Integration Testing

Drivers are used in bottom-up integration testing. In this technique, you start at the bottom of your call chain and work your way up. You first test the lowest-level components, using drivers to stand in for the higher-level components that would normally call them. As you work your way up, you replace the drivers with the real implementations. 

Top-Down Integration Testing

The opposite of the bottom-up approach is top-down integration testing. In this case, you work the other way around. You start at the top of the call chain, like the API for example. Components below that top-most level are replaced by stubs. As you replace your stubs with real implementations, you work your way down until you have the whole system covered. 
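As a sketch of the first top-down step, assume an invented `Api` class at the top of the call chain, with the repository layer below it stubbed out:

```python
from unittest.mock import Mock

class Api:
    """Top of the call chain: delegates to a repository layer below it."""
    def __init__(self, repository):
        self.repository = repository

    def get_user(self, user_id):
        user = self.repository.find(user_id)
        return {"id": user_id, "name": user["name"]}

# Top-down step 1: test the top-most component with the layer
# below it replaced by a stub.
repo_stub = Mock()
repo_stub.find.return_value = {"name": "Ada"}
assert Api(repo_stub).get_user(7) == {"id": 7, "name": "Ada"}
# Later steps swap repo_stub for the real repository implementation.
```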

Sandwich Testing

Sandwich testing (or hybrid integration testing) combines the bottom-up and top-down approaches. As such, it works from the user interface or API down and from the lowest layer up, meeting in the middle. Sandwich testing uses both drivers and stubs. 

How to Perform Integration Testing

Before you start integration testing, make sure that your team has an integration testing plan. Will the programmers write integration tests, or is that a task for testers? Which approach will you use (big bang, top down, bottom up, or sandwich)? Will you be writing specific test scenarios that remain fixed over time, or will you increase the test surface until you have (almost) the entire application under test? 

You will also need to plan for the time required to design, write, and perform the integration tests. Integration testing can be a time-consuming undertaking. 

If you need to integrate with external services, make sure you have approvals to set up a test environment. Also look at how this test environment can be reset to its initial state after running your tests. This ensures that a subsequent test run isn't hindered by test data from previous runs. 
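One way to sketch such a reset, using Python's `unittest` and a throwaway in-memory SQLite database that is recreated for every test (the table and data are invented for illustration):

```python
import sqlite3
import unittest

class OrderStoreTest(unittest.TestCase):
    """Integration test against a throwaway database, reset for every test."""

    def setUp(self):
        # A fresh in-memory database per test: data from previous
        # runs can never leak into this one.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER, total REAL)")

    def tearDown(self):
        self.db.close()

    def test_insert_and_read(self):
        self.db.execute("INSERT INTO orders VALUES (1, 19.99)")
        rows = self.db.execute("SELECT total FROM orders").fetchall()
        self.assertEqual(rows, [(19.99,)])

# Run the test case programmatically, for illustration:
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(OrderStoreTest).run(result)
assert result.wasSuccessful()
```

For a real external database, the same setUp/tearDown hooks would run your seed and cleanup scripts instead.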

Finally, look at which test tools you'll be using. You should automate as much as possible, so look into tools like Cucumber, Selenium, or Waldo. If you're looking to do integration testing with just a few components, unit testing tools like NUnit or JUnit can be sufficient too. 

Now implement your chosen approach. Discover new test cases using the black box or white box techniques and write or design them in the integration testing tool you chose. Then add them to your software development process and run the tests regularly (e.g., run them on your continuous integration server). 

Entry and Exit Criteria for Integration Testing

Let's take a step back and look at when you can start integration testing and when you can move on to a subsequent phase. These are the entry and exit criteria for integration testing. 

Entry Criteria

As mentioned, integration testing is a form of testing that comes after unit testing. This doesn't mean you can't still write unit tests when you're working on integration tests. But it makes little sense to craft integration tests if you don't have any unit tests in place. Unit tests can cover a broader range of test cases in less time because they require less setup. 

Another entry criterion is that you must have a test environment that enables you to perform the integration tests: databases, servers, external services, and specific hardware. And finally, it must be clear to developers how they will integrate the different components. 

Exit Criteria

When can you move on to the next phase in testing? When your exit criteria are fulfilled. The next phase is often called system testing. It's where you test the complete application as it will be used by end users. 

Of course, one important exit criterion for integration testing is that your integration tests all pass. Any bugs you found during integration testing must now be fixed or added to a backlog if you decide it isn't a blocking issue. 

Another exit criterion is that all test scenarios have been executed. If certain functionality is still (partially) untested, you should create integration tests for those scenarios first. 

Best Practices in Integration Testing

Let's finish this article by looking at some best practices for integration testing. 

A first tip is to start integration testing only on components that have been thoroughly unit tested. If you find a bug during integration testing, try to reproduce it in a unit test and fix it there. Then rerun your integration test to verify that the fix resolved the issue. 

Another best practice is to automate as much as you can. Use tools to automate your integration tests and see if you can integrate them into your CI/CD pipeline. If your integration tests take a long time to run, consider running them only once a day. Because integration tests can take longer to run, make sure the bulk of your testing is in unit tests. 

Focus your integration tests on the integration of the software components. Any business logic should be tested with unit tests. An integration test that fails should ideally point to integration errors or changes in the test environment (e.g., issues with external services or hardware). 

Don't wait too long with integration testing. Early feedback saves time and money. And don't stop at integration testing. Also run end-to-end tests and UI tests. 

Conclusion

Integration testing is an important part of software engineering. It ensures that individual components interact correctly with each other and with external services. Just like in other types of testing, you can use black box or white box techniques to design integration tests. You also saw that you can work from the bottom up, take a top-down approach, or combine the two with the sandwich technique. You can replace unfinished components with stubs or drivers. 

Integration testing comes after unit testing and before end-to-end testing. Ideally, you'll automate your integration tests so that you can add them to your CI/CD pipeline. We mentioned some tools that you can use, like Cucumber, Selenium, and Waldo.

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.