
Author Topic: SFML Test Strategy  (Read 5672 times)



Moderator
SFML Test Strategy
« on: August 15, 2018, 11:15:40 pm »
It's no secret that automated tests have many benefits, and after many years of discussion and one stale Git branch, it's time to finally dive in.

There are many different kinds of automated tests. We will limit ourselves to Unit Tests and Integration Tests for now. The following gives a brief overview of what kinds of tests we expect and how to approach testing.

Unit Tests

Unit tests should test one specific unit, as independently from any other unit as possible. This means that we're not testing what effect one unit has on another unit, but simply checking the functionality of that one unit.
For SFML, a unit is most of the time simply a class. For example, sf::Rect<T> is a unit and sf::Vector2<T> is a unit, but the unit tests for sf::Rect<T> shouldn't check that the properties of sf::Vector2<T> are set correctly, because the sf::Rect<T> unit tests should only test the sf::Rect<T> interface.
The point is that if every class has its own unit tests, we can be certain that every class behaves exactly the way we expect it to, so it doesn't need to be retested elsewhere.

The foundation for unit tests should be introduced with SFML 2.6.0.

Integration Tests

Integration tests, on the other hand, intentionally connect multiple units to ensure that they behave as expected in combination.
For SFML this often means letting data pass through several units and finally transforming the result into a format that can be asserted. For example, rendering a shape would go through a render texture, which is then converted to an image that can be compared pixel by pixel with a reference test image.

The proper setup and intent of integration tests still needs to be determined, but some things will certainly build on top of the unit test setup.

What do we test?

Tests exist to assert that the promises we make by providing a public API actually hold true.
As such, both unit and integration tests should only ever test the public interface and ensure that the API does what the documentation says.

When do we write tests?

For every new feature there should be multiple new unit tests, covering positive as well as negative test cases.
For every bugfix there should be at least one new integration test that covers the bug and guards against regressions.
Official FAQ: https://www.sfml-dev.org/faq.php
Nightly Builds: https://www.nightlybuilds.ch/
Dev Blog: https://dev.my-gate.net/
Thor: http://www.bromeon.ch/libraries/thor/


Moderator, Thor Developer
Re: SFML Test Strategy
« Reply #1 on: August 19, 2018, 09:55:38 pm »
Time passes crazy fast, I remember that stale branch vividly... Good that we finally get back to this topic, and cool to see that you already started updating it :)

Your post is a really good summary of testing. Over the years, I've personally learned a few things:

1. If tests are cumbersome to write, people will write fewer of them.
With Catch, we already went for a syntactically nice solution. I would even suggest using simple test styles ("do-expect") rather than the more verbose BDD ones ("given-when-then"), to reduce boilerplate to a minimum. A keyword-based test case description often helps a lot; let's not enforce long, poet-like writing.

  • GIVEN a sf::Sprite / WHEN its position is set / THEN getting its position returns passed value
  • sf::Sprite::getPosition/setPosition are symmetric
It's not just about writing, but also about reading and knowing immediately what is being tested. Even if "when/then" phrases make sense, they can be part of the description, not the code structure.

2. Tests should not dictate an overly complex design
In some programming environments, Dependency Injection, Mocks and Spies are considered best practice in unit testing. While they definitely help in some cases, they can end up exposing implementation details and hindering the isolation of functionality.

Particularly limiting is fine-grained testing at the level of precise function calls instead of behavior. Or, more generally, testing the implementation rather than the API (e.g. by injecting an "update observer" just for testability). A very good indicator of such a design flaw is when test cases have to be adapted as soon as the implementation changes.

Example: instead of
When Sprite::setTexture() is called with reset=true, then Sprite::setTextureRect() must be invoked with the texture size as argument.
this could be:
When Sprite::setTexture() is called with reset=true, then Sprite::getTextureRect() must return the texture's size.

Here's a longer article on the topic.

3. A unit is not always a class
When thinking of unit tests, a lot of people assume a 1:1 relationship between test cases and the classes in their code. This is often applicable, but there are good reasons why a unit may span multiple classes, or in C++ even free functions. This is often the case with classes that exist only as data containers, combined with behavioral classes that use them.

For example, sf::Event is very uninteresting on its own, the whole behavior is implemented in sf::Window.

For I/O components, an option can be to write integration tests directly, and skip unit tests.

4. Test critical components first
Terms like "coverage" make people think that the more functionality is tested, the better. The reality, however, is that resources are limited, and time spent on tests does not flow into bugfixes and features. On the other hand, tests save time in the future if they prevent bugs.

A pragmatic approach is to write tests when the time to write them is less than the time that would otherwise be spent fixing the bugs they catch. Of course, this requires estimation, and often the most "obvious" functionality can harbor the sneakiest bugs. However, a good start is to begin writing tests for functionality that is "critical". This can mean:
  • The implementation is not straightforward and may contain non-trivial corner cases.
  • A refactoring is possible in the future, and has a good chance of breaking the code.
  • The component depends on hardware and/or operating system, thus behavior may vary.
  • The functionality is depended on by many other components in the library, and bugs would cause considerable damage.
  • The component is relatively new and not yet battle-hardened.
The last point is controversial, but exposing unchanged functionality to many people over many years does increase the likelihood of that functionality working correctly. Of course, this should only be relevant when it comes to the decision "should I write this test or spend the time on something else important?".
Zloxx II: action platformer
Thor Library: particle systems, animations, dot products, ...
SFML Game Development: first SFML book

Elias Daler

Re: SFML Test Strategy
« Reply #2 on: August 20, 2018, 12:45:10 am »
Another nice thing to do would be to skim through the previously fixed bugs and, where possible, add tests for them to the unit/integration test suites.

It's also worth remembering to write tests for the edge cases that dictate how SFML deals with certain things. For example, a test which tells whether the rects (0, 0, 10, 10) and (10, 0, 10, 10) intersect or not. Such tests also serve the nice purpose of documenting the behavior users should expect. Unit tests which show you what to expect when you do X and Y are very satisfying.

P.S. Totally agree on the BDD point. Let's not get verbose. This is one of the reasons I like Google Test and how tests get written with it - it's mostly just code with different assert-like macros. Easy to write, easy to understand, easy to modify.
Tomb Painter, Re:creation dev | eliasdaler.github.io | @EliasDaler | Tomb Painter dev log


Re: SFML Test Strategy
« Reply #3 on: August 20, 2018, 02:30:55 pm »
Quote from: Elias Daler
It's also worth remembering to write tests for the edge cases that dictate how SFML deals with certain things. For example, a test which tells whether the rects (0, 0, 10, 10) and (10, 0, 10, 10) intersect or not. Such tests also serve the nice purpose of documenting the behavior users should expect. Unit tests which show you what to expect when you do X and Y are very satisfying.

In my experience, this kind of test can also help the many people who wonder how to use different parts of the library. A test is a direct code example that demonstrates some of the functionality. A lot of people use tests as examples and as a helper for learning how to use a library.

As for BDD, I also agree that an overly verbose style is often annoying and makes tests harder to read in a lot of cases. And if we continue with the premise that some users will use the tests to learn the library, it makes things harder for them too.
Code Concept
Rosme on IRC/Discord