Time passes crazy fast, I remember that stale branch vividly... Good that we're finally getting back to this topic, and cool to see that you've already started updating it!
Your post is a really good summary of testing. Here are a few things I've personally learned over the years:
1. If tests are cumbersome to write, people will write fewer of them

With Catch, we already went for a syntactically nice solution. I would even suggest using simple test styles ("do-expect") rather than more verbose BDD ("given-when-then") ones, to reduce boilerplate to a minimum. A keyword-based test case description often helps a lot; let's not enforce long, poem-like writing.
Compare:
- GIVEN a sf::Sprite / WHEN its position is set / THEN getting its position returns passed value
- sf::Sprite::getPosition/setPosition are symmetric
It's not just about writing, but also reading and knowing immediately what is being tested. Even if "when/then" phrases make sense, they can be part of the description, not the code structure.
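To make the difference concrete, here's a minimal sketch of both styles in Catch (assuming the Catch2 v3 header and the SFML 2 sprite API; the values 10/20 are arbitrary):

```cpp
#include <catch2/catch_test_macros.hpp> // or <catch.hpp> for the single-header Catch
#include <SFML/Graphics/Sprite.hpp>

// Verbose BDD style: three levels of nesting and prose for a single check
SCENARIO("sf::Sprite position")
{
    GIVEN("a sf::Sprite")
    {
        sf::Sprite sprite;

        WHEN("its position is set")
        {
            sprite.setPosition(sf::Vector2f(10.f, 20.f));

            THEN("getting its position returns the passed value")
            {
                CHECK(sprite.getPosition() == sf::Vector2f(10.f, 20.f));
            }
        }
    }
}

// Plain "do-expect" style: same check, keyword-based description, minimal boilerplate
TEST_CASE("sf::Sprite::getPosition/setPosition are symmetric")
{
    sf::Sprite sprite;
    sprite.setPosition(sf::Vector2f(10.f, 20.f));
    CHECK(sprite.getPosition() == sf::Vector2f(10.f, 20.f));
}
```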
2. Tests should not dictate an overly complex design

In some programming environments, Dependency Injection, Mocks and Spies are considered best practices in unit testing. While they definitely help in some cases, they may lead to exposing implementation details and hinder the isolation of functionality.
Particularly limiting is fine-grained testing on the level of precise function calls instead of behavior. Or more generally, testing the implementation and not the API (e.g. by injecting an "update observer" just for testability). A very good indicator for such a design flaw is when test cases have to be adapted as soon as the implementation changes.
Example: instead of
- When Sprite::setTexture() is called with reset=true, then Sprite::setTextureRect() must be invoked with the texture size as argument.

this could be:
- When Sprite::setTexture() is called with reset=true, then Sprite::getTextureRect() must return the texture's size.
Here's a longer article on the topic.
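As a rough sketch of how the behavior-based variant could look in a Catch test (again assuming Catch2 v3 and the SFML 2 API, where sf::Texture::create() takes a width and height; 64x32 is an arbitrary size for the example):

```cpp
#include <catch2/catch_test_macros.hpp>
#include <SFML/Graphics/Sprite.hpp>
#include <SFML/Graphics/Texture.hpp>

TEST_CASE("sf::Sprite::setTexture with resetRect adjusts the texture rect")
{
    sf::Texture texture;
    REQUIRE(texture.create(64, 32));

    sf::Sprite sprite;
    sprite.setTexture(texture, true); // resetRect = true

    // Observable behavior through the public API -- no knowledge of whether
    // setTextureRect() was called internally
    CHECK(sprite.getTextureRect() == sf::IntRect(0, 0, 64, 32));
}
```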
3. A unit is not always a class

When thinking of unit tests, a lot of people assume a 1:1 relationship between test cases and the classes in their code. This may be applicable, but there are good reasons why a unit may span multiple classes, or in C++ even global functions. This is often the case with classes that exist only as data containers, combined with behavioral classes that use the data classes.
For example, sf::Event is very uninteresting on its own; the whole behavior is implemented in sf::Window.
For I/O components, an option can be to write integration tests directly, and skip unit tests.
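A hypothetical illustration of such a multi-class unit, using sf::Color (pure data) and sf::Image (the behavior operating on it), sketched against the SFML 2 API:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <SFML/Graphics/Image.hpp>

// The "unit" here is the pixel-storage behavior, which spans the data class
// sf::Color and the behavioral class sf::Image -- there is no separate test
// case for sf::Color alone.
TEST_CASE("sf::Image stores and returns pixel colors")
{
    sf::Image image;
    image.create(4, 4, sf::Color::Blue);

    image.setPixel(1, 2, sf::Color::Red);

    CHECK(image.getPixel(1, 2) == sf::Color::Red);
    CHECK(image.getPixel(0, 0) == sf::Color::Blue);
}
```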
4. Test critical components first

Terms like "coverage" make people think that the more functionality is tested, the better. The reality, however, is that resources are limited, and time spent on tests will not flow into bugfixes and features. On the other hand, tests save time in the future if they prevent bugs.
A pragmatic approach is to write tests when the time to write them is less than the time it would take to fix the bugs they are going to catch. Of course, this requires estimation, and often "obvious" functionality leads to the sneakiest bugs. However, a good start is to write tests for functionality that is "critical". This can mean:
- The implementation is not straightforward and may contain non-trivial corner cases.
- A refactoring is possible in the future, and has a good chance of breaking the code.
- The component depends on hardware and/or operating system, thus behavior may vary.
- Many other components in the library depend on the functionality, so bugs would cause considerable damage.
- The component is relatively new and not yet battle-hardened.
The last point is controversial, but exposing unchanged functionality over many years to many people does increase the likelihood of that functionality working correctly. Of course, this should only be relevant when it comes to the decision of "should I write this test or spend the time on something else important?".