There's no denying software testing is crucial if you want to deliver high-quality code for your clients. However, learning about testing can feel overwhelming; there are so many types of testing, and the already colossal lexicon just keeps growing. In this post, we help you on your testing journey by covering the "regression testing vs. integration testing" dilemma.
We open the post with some fundamentals, defining each testing type. Then, we walk you through some of the main differences between integration testing and regression testing, showing how they compare across some crucial criteria. Finally, we give you a verdict, stating when to use which in your testing strategy.
Let's get started.
Regression Testing vs. Integration Testing: The Fundamentals
As promised, let's begin with some basics about the "regression testing vs. integration testing" discussion.
What Is Regression Testing?
To define regression testing, you first need to learn what a regression is. As a software engineer, you're probably familiar with the following scenario: you add a new feature to an application, test it, and it seems to be working just fine. You're satisfied with the result, but your happiness doesn't last—another feature stops working. Or maybe an old bug resurfaces.
This is what we call a regression: the system has regressed to a previous, undesirable state. That's where regression testing comes in handy. After changing the codebase, you rerun an extensive suite of tests to confirm that all of the other features are still working and that no known defects have come back. When we say regression testing nowadays, we typically mean automated regression testing. It'd be virtually impossible to ship code at a fast pace if we had to manually retest all of our code after each change, no matter how small.
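To make this concrete, here's a minimal sketch of a test that pins a previously fixed bug so it stays in the regression suite. The function name, the bug number, and the scenario are all invented for illustration:

```python
# A hypothetical regression test that pins a previously fixed bug.
# Suppose normalize_price once crashed on a zero quantity (bug #123,
# invented here). Keeping this test in the suite ensures the defect
# stays fixed even as the surrounding code changes.

def normalize_price(total, quantity):
    """Return the unit price, treating zero quantity as zero."""
    if quantity == 0:  # the original bug: this guard was missing
        return 0.0
    return total / quantity

def test_zero_quantity_regression():
    # Fails again the moment someone reintroduces the division-by-zero bug.
    assert normalize_price(10.0, 0) == 0.0
```

Every time the suite runs after a change, this test silently re-verifies that the old defect hasn't come back.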
Keep in mind that regression testing isn't a type of test in the way unit testing or end-to-end testing is. Rather, it's a way of using your existing tests.
What Is Integration Testing?
Funnily enough, we'll define integration testing by first defining unit testing. Though that might sound odd, it's easier to understand integration testing by contrasting it with its counterpart.
So, a unit test is a small, concise, and isolated test. It checks whether a small portion of the codebase—a "unit"—works as expected in complete isolation. That is, a proper unit test usually doesn't have any external dependencies. It doesn't talk to the database, the file system, or a message broker, nor does it cause any side effects. These properties ensure unit tests are deterministic, fast, and able to give precise feedback about code issues. However, since units in real code depend on one another and on the external world, unit tests often fall short of catching certain defects.
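Here's a minimal unit test sketch (the function under test is invented for illustration). Note that it touches no database, file system, or network, so it's fast and deterministic:

```python
# A minimal unit test: the unit under test is a pure function,
# so the test needs no setup, teardown, or external dependencies.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(80.0, 0) == 80.0
```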
That's where integration testing comes in handy. Integration tests integrate multiple units or components. Most importantly, they're not shy about talking to the real-world external dependencies we cited before.
Because of that, integration tests tend to be slower than unit tests and require a more elaborate setup. They also need a more labor-intensive teardown process, since any leftover state from one test run could influence later runs.
Simply put, integration tests are more complicated and more expensive than unit tests. But they're also more realistic since they're closer to how real users use applications. Thus, they're able to catch defects that unit tests never could.
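The following sketch shows an integration test exercising a real database dependency. We use SQLite's in-memory mode so the example stays self-contained; the data-access functions are invented for illustration. Note the explicit setup and teardown that a unit test wouldn't need:

```python
# An integration test that talks to a real database (SQLite in-memory here,
# purely so the example is self-contained and runnable).
import sqlite3

def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def count_users(conn):
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_save_and_count_users():
    conn = sqlite3.connect(":memory:")            # setup: a real database
    conn.execute("CREATE TABLE users (name TEXT)")
    try:
        save_user(conn, "alice")
        save_user(conn, "bob")
        assert count_users(conn) == 2             # units verified *in integration*
    finally:
        conn.close()                              # teardown: release resources
```

Against a shared database server, the teardown would also have to remove the inserted rows—exactly the kind of extra housekeeping that makes integration tests more expensive than unit tests.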
What Is the Difference Between Integration Testing and Regression Testing?
You've just seen the definitions of both regression testing and integration testing. Now it's time to walk you through a list of factors and see how the two testing types compare across each criterion.
Goals
Let's start by comparing the goals of regression testing and integration testing.
Regression testing, as you've seen, verifies that you didn't introduce regressions after making changes to the software. Regression testing doesn't necessarily ensure that your recent change works. Instead, its goal is to ensure that you didn't inadvertently change the application's behavior.
Integration testing, on the other hand, aims to ensure the quality of your recent change. When you add a new feature to an application, you'll typically write unit tests for it. But when applicable, you'll also write some integration tests. These are typically fewer in number than unit tests—remember, they're slow and expensive!—and cover only some of the critical paths in the application.
In short, integration testing aims to verify your feature works as expected, not only in isolation but also in integration with other components and external dependencies.
The Software Development Life Cycle
Our next factor is related to time. When are these testing activities done in the software life cycle? Here we have a stark contrast between the two types of testing.
Remember that regression testing is primarily a way of running tests, not a specific type of test in and of itself. Its purpose is to verify that the application still works after a change. That means, of course, you must already have an application! Running tests in "regression testing" mode requires that the relevant part of the application is already built.
When it comes to integration testing, things can be more nuanced. You can certainly write integration tests after the fact—i.e., after you implement your feature. But you can also write them before coding the feature.
If you use methodologies such as acceptance test-driven development (ATDD) or behavior-driven development (BDD), you'll start the development by writing failing tests that describe the application's behavior from the user's point of view.
You can use integration-style tests for ATDD and BDD, which is a common way for teams to develop software from the outside in.
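As a sketch of this outside-in style, here's a BDD-flavored test written as if the feature didn't exist yet. The `Cart` class and its API are invented for illustration; in real BDD you'd often use a tool such as behave or pytest-bdd, but plain Given/When/Then comments convey the same idea:

```python
# An outside-in, BDD-style test. In ATDD/BDD you would write the test first,
# watch it fail, and only then implement the code below it.

class Cart:
    """Stand-in implementation so the example runs; under ATDD/BDD this
    class would be written only after the test had been seen to fail."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_customer_sees_cart_total():
    # Given an empty shopping cart
    cart = Cart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Then the total reflects both items
    assert cart.total() == 15.0
```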
Test Coverage
When it comes to test coverage, how do these two forms of testing compare? The answer, again, is a clear contrast.
When it comes to regression testing, you want to achieve the highest coverage possible. Think about it: you want your regression suite to act as your safety net, giving you confidence that your latest change didn't break anything.
With integration tests, things are different. You'll remember that these tests are slow and expensive. They're important, but you really don't want more of them than strictly needed. Thus, when it comes to integration testing, covering the most critical paths should be your main goal rather than achieving comprehensive coverage.
Regression Testing vs. Integration Testing: The Verdict
This post walked you through the definitions of integration and regression testing. Then, we covered their main differences, showing how they compare across three crucial factors. Now it's time to give you our verdict comparing these two concepts. Our conclusion is one that you, an astute reader, are likely to have reached yourself: these two aren't at odds at all!
Since regression testing is mostly a way of testing, virtually all types of testing can be used as regression tests, including integration tests.
Whether integration tests are included in the regression test suite depends on several factors, including how complex their setup is, how slow they are, and how expensive they are. If their costs are relatively low and they're fast enough, there's no reason they can't be part of the regression suite.
Truly slow and expensive tests—such as end-to-end tests—are better left out of the suite that runs on the CI server after each commit. A build that runs once or twice daily is probably sufficient for those, since unit and integration tests already provide plenty of fast feedback.
Testing is an important part of developing code, regardless of which test or method you use. Waldo can help you test your product with fast, automated tests. Start a free trial.