Hey all. I'm working on the timing for my game and I have a problem with sf::Clock.
My goal is to have a game-wide clock drive the logic, updating it once every 10 milliseconds. However, I also want 1 millisecond precision so that I can render between updates for smooth movement at any framerate.
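For reference, here's roughly the loop I have in mind -- just a sketch, with getTickMs(), UpdateLogic(), and Render() as placeholders for whatever I end up with:

const unsigned TIMESTEP = 10;        // logic updates every 10 ms

unsigned previous = getTickMs();     // placeholder millisecond tick source
unsigned lag = 0;

while (running)
{
    unsigned current = getTickMs();
    lag += current - previous;
    previous = current;

    while (lag >= TIMESTEP)          // catch the logic up in fixed steps
    {
        UpdateLogic();
        lag -= TIMESTEP;
    }

    // render part-way between the last two logic states
    Render(static_cast<float>(lag) / TIMESTEP);
}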
Now I could very well use sf::Clock for this, but it has one major problem: it returns the elapsed time as a float, in seconds.
Floats only have 23 bits of mantissa, so by my math that gives me a little over 2 hours before the clock loses millisecond precision and my game starts behaving very strangely as a result. (2^23 ms / 1000 / 60 / 60 = ~2.33 hours)
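You can see the cutoff with a quick standalone test (the exact numbers are just illustrative):

#include <cstdio>

int main()
{
    // At ~2.4 hours (8600 s) a float's step size is already ~0.98 ms,
    // so "one millisecond later" rounds to the nearest representable step.
    float t1 = 8600.0f;
    float u1 = t1 + 0.001f;
    std::printf("%f\n", u1 - t1);   // prints 0.000977, not 0.001000

    // At 10 hours (36000 s) the step size is ~3.9 ms and the added
    // millisecond is swallowed entirely.
    float t2 = 36000.0f;
    float u2 = t2 + 0.001f;
    std::printf("%f\n", u2 - t2);   // prints 0.000000

    return 0;
}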
It's not unreasonable to expect someone to play a game for 2 hours straight, especially if they pause in the middle and take a break while leaving the game running in the background. So this is a serious concern of mine.
Because of this, using sf::Clock by itself is unacceptable for my needs. So what about solutions?
My first thought was to hide sf::Clock behind my own class. Something like this:
class Clock
{
private:
    unsigned tick;    // accumulated whole milliseconds
    float carry;      // sub-millisecond remainder kept between calls
    sf::Clock clk;

public:
    Clock() : tick(0), carry(0.f) { }

    unsigned GetTick()
    {
        // GetElapsedTime() returns seconds as a float, but since Reset()
        // is called on every query, clk never runs long enough to lose precision.
        carry += clk.GetElapsedTime() * 1000.f;
        clk.Reset();

        // Only move whole milliseconds into the tick count; keeping the
        // fraction avoids dropping a bit of time on every single call.
        unsigned whole = static_cast<unsigned>(carry);
        carry -= static_cast<float>(whole);

        tick += whole;
        return tick;
    }
};
While this isn't an ideal solution, it's the best one I have so far.
Is there any way I can just get a millisecond "tick count" from SFML? One that wraps at 32 bits? Wrapping isn't a problem -- it's loss of precision that's a problem.
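(To be clear about why wrapping is fine: as long as deltas are taken with unsigned subtraction, the modular arithmetic works out across the wrap point. Something like:

// unsigned subtraction is modulo 2^32, so the delta is still correct
// when the tick count wraps past zero between two samples
unsigned previous = 0xFFFFFFF0u;         // just before the wrap
unsigned current  = 0x00000010u;         // just after it
unsigned delta    = current - previous;  // 0x20 == 32 ms, as expected

So a wrapping 32-bit tick count would suit me fine.)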