For me, unit testing is the most essential of all test levels, and it is the first thing I look at when I start a new consulting project. Why? This is where the robustness of an application comes from. Without sufficient unit tests, stable integration, system and acceptance tests are difficult to achieve. Good unit tests help enormously in keeping the code consistent: problems with the system becoming unstable through changes or unexpected side effects can be minimized in advance. They are also an excellent indicator of the quality of the architecture and design. A statement like "unit testing is not possible here" strongly suggests that potential trouble lies dormant. At the latest when this area is refactored, quality assurance in the form of tests will be missing. Unit tests are an ideal feedback mechanism in software development.
If you change or extend the code, unit tests provide immediate feedback on whether you have broken something. This is also why unit tests have to execute quickly. If this is not possible, that again is a sign that the architecture and design deserve another look.
Definition of unit testing
In the literature, this test level is defined as component testing and frequently appears under that term; module testing is also a common synonym. The terminology also depends somewhat on the programming language and the context. In project practice, however, unit testing has become the established term. I can't remember any project in the last 20 years where this test level was not called unit testing.
The ISTQB defines unit testing as: "A test level focused on a single hardware or software component."
In general you can say: unit testing is the test of the smallest unit, whether that unit is called a module, a class, a statement or something else.
The test basis for unit testing is all information that describes this small functional block. This can be information derived from the design or architecture, or parts of a user story or requirement. Sometimes there are also component specifications or models that describe the functionality of the unit.
Test case creation for unit tests
Structured test design techniques such as equivalence partitioning, boundary value analysis or decision tables are particularly suitable for creating test cases for unit or module tests. Combinatorial techniques such as pairwise testing or the classification tree method also apply. Compared to other test levels such as system testing or acceptance testing, the advantage here is that value ranges and the like are much more concrete, which makes deriving test cases easier.
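As a small sketch of boundary value analysis: assume a hypothetical rule that a discount applies to ages 18 through 65 inclusive. The technique picks test values just inside and just outside each limit of the range (the function name and range are illustrative, not from the original).

```python
# Hypothetical unit under test: a discount applies to ages 18..65 inclusive.
def is_discount_eligible(age: int) -> bool:
    """Return True if the age lies within the eligible range 18..65."""
    return 18 <= age <= 65

# Boundary value analysis: values on and just beyond each edge of the range.
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in cases.items():
    assert is_discount_eligible(age) == expected, f"unexpected result for age {age}"
```

With only six values, both limits of the specification are exercised from the inside and the outside, which is exactly where off-by-one defects tend to hide.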
Test Driven Development (TDD) has become very popular through agile software development. Here the functional code is not written first with the test code afterwards, but the other way round: the test is written first and fails; then the functional code is developed until the test case turns "green". Development continues in this rhythm. TDD has many advantages, especially the focus it puts on tests, but also a few disadvantages.
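The red-green rhythm can be sketched with the classic FizzBuzz exercise (a hypothetical example, not from the original). The comments mark which part is written in which TDD step:

```python
import unittest

# Step 2 ("green"): the minimal implementation -- in TDD it is written
# only after the test below has been run once and failed.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1 ("red"): this test is written first; running it before
# fizzbuzz() existed produced the failure that drove the code above.
class TestFizzBuzz(unittest.TestCase):
    def test_multiple_of_three_and_five(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main(argv=["fizzbuzz_tdd"], exit=False)
```

Each new requirement repeats the cycle: add a failing test, make it pass with the simplest change, then refactor.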
The test object for unit testing is the smallest unit of the respective programming language that can be tested meaningfully. The point is to test this unit in isolation, without interaction with other such units (classes, modules, etc.).
Test objectives of the unit test
The goal of unit testing is to test the functional and also non-functional aspects at the lowest level. This has several advantages:
- The tests create a robust foundation on which the other test levels and the entire architecture can be built.
- Errors that occur in unit testing can be corrected quickly and precisely due to the proximity to the code.
- Fast feedback mechanism when changes are made to the software.
Very often the topic of code coverage comes up here. What is code coverage? It shows how much of the source code is executed when the test cases run. Coverage can target different parts of the code, which is why we distinguish, for example, between statement coverage and branch coverage.
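The difference between statement and branch coverage can be seen in a small sketch (the function and discount rule are hypothetical):

```python
# Hypothetical unit under test: a flat discount for members.
def apply_member_discount(total: float, is_member: bool) -> float:
    if is_member:
        total -= 10.0  # flat member discount
    return total

# This single test executes every statement: 100% statement coverage.
assert apply_member_discount(100.0, True) == 90.0

# But the "if" also has an implicit else branch (is_member False) that
# the first test never takes, so branch coverage was only 50%.
# A second test closes that gap:
assert apply_member_discount(100.0, False) == 100.0
```

This is why a branch coverage target is strictly harder to satisfy than the same percentage of statement coverage.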
Code coverage as an exit criterion
Code coverage is also often used as an exit criterion or as part of the definition of done. Then you find statements such as "unit tests must achieve 80% code coverage". Defining and measuring such numbers is certainly an interesting aspect. However, it creates a serious problem: the quality of the test cases can drop, because people try to reach the coverage target with test cases that are as simple as possible. Special cases and negative tests (e.g. with invalid test data) are omitted, since they contribute little or nothing to increasing the coverage figure.
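How a coverage target can be "met" without real testing can be shown with a sketch (parse_price is a hypothetical unit under test):

```python
# Hypothetical unit under test: parse a price string like "19.99 €".
def parse_price(text: str) -> float:
    return float(text.replace("€", "").strip())

# This "test" executes every line of parse_price and therefore counts
# fully toward a coverage goal -- yet it asserts nothing and would still
# pass if the function returned nonsense.
def test_parse_price_coverage_only():
    parse_price("19.99 €")

# A meaningful test checks the result and adds a negative case.
def test_parse_price_meaningful():
    assert parse_price("19.99 €") == 19.99
    try:
        parse_price("not a price")
        assert False, "expected a ValueError for invalid input"
    except ValueError:
        pass

test_parse_price_coverage_only()
test_parse_price_meaningful()
```

Both tests produce the same coverage number; only the second one would catch a regression.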
Therefore, benchmarks like 80% code coverage should always be taken with a grain of salt. They can help create awareness of unit testing, but they can also encourage negligence.
Test environment for unit tests
There are usually two test environments for unit tests: on the one hand the developer's development environment, where the test cases can be executed quickly and easily; on the other hand the build server or pipeline, where the software is built for further purposes. Plenty of tools, scripts and best practices are available for both nowadays.
Since unit tests always focus on a single unit, the question arises: what happens to the rest, e.g. the calling or the called classes? These are typically replaced by test drivers and stubs that simulate the necessary calls and responses.
Due to the modularity and small size of unit tests, handling test data is usually easy. The data only has to cover a specific aspect and can therefore be created or deleted easily. It is kept either directly with the test cases or in a shared repository. In larger projects, a test data generator also makes sense.
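One common way to keep such test data next to the test cases is a small builder with valid defaults; a minimal sketch (all names hypothetical):

```python
from dataclasses import dataclass, replace

# Hypothetical domain object used by the tests.
@dataclass(frozen=True)
class Customer:
    name: str = "Test Customer"
    age: int = 30
    active: bool = True

def a_customer(**overrides) -> Customer:
    """Build a valid default customer, overriding only the fields a test cares about."""
    return replace(Customer(), **overrides)

# Each test creates exactly the aspect it needs; everything else stays valid.
inactive = a_customer(active=False)
assert inactive.active is False
assert inactive.name == "Test Customer"  # defaults remain intact
```

Because each object is built fresh per test, there is nothing to clean up and tests stay independent of each other.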
Test automation of the unit test
Test automation is a no-brainer for unit tests. There are now frameworks for all common programming languages that support both the creation and the execution of unit tests, both in development environments and in build pipelines.
Unit tests in agile projects
Unit tests are necessary in all project types and are by now well established. In agile projects they have enjoyed a high status from the beginning: because of the ongoing refactoring and the short cycles in which the software is built, they are essential there as a feedback mechanism.
In projects, I always encounter three problems that need to be solved:
- No negative tests or special cases are tested, i.e. unit tests only check whether the standard case works, but not what happens when corrupt data or parameters outside the specification are used.
- The focus is on delivering functionality, and no unit tests are written. The problem is that this works well at the beginning: the software is still manageable, the complexity is low, everything runs, and development progress looks brilliant. In the background, however, a mountain of technical debt builds up that is hard to pay off later.
- The necessity is not recognized. Statements like "I'm done, I just need to write the tests" indicate that unit tests are seen as a detached task. High-performing teams, in contrast, consider work "done" only when all tests have been developed. A different mindset prevails there, which in my experience leads to significantly better results.
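The missing negative tests from the first point above can be sketched briefly (parse_quantity and its 1..999 range are hypothetical):

```python
# Hypothetical unit under test with a specified valid range of 1..999.
def parse_quantity(raw: str) -> int:
    qty = int(raw)  # raises ValueError for corrupt data
    if qty < 1 or qty > 999:
        raise ValueError(f"quantity out of range: {qty}")
    return qty

# The standard case -- where many test suites stop:
assert parse_quantity("5") == 5

# Negative tests: corrupt data and parameters outside the specification.
for bad in ["", "abc", "0", "1000", "-3"]:
    try:
        parse_quantity(bad)
        assert False, f"expected a ValueError for {bad!r}"
    except ValueError:
        pass
```

Five extra lines of loop cover the cases that typically cause production incidents, even though they add almost nothing to the coverage figure.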
Unit testing in practice
More than at the other test levels, a minimum amount of unit testing has become standard practice. Nevertheless, there is often untapped potential in the implementation: too often the focus is purely on code coverage, or test case development is perceived as a necessary evil. Yet it is precisely at this level that many quick wins for the success of the project can be had.