Just out of interest: weren't the floats theoretically more precise than a millisecond? On the other hand, they are less accurate over long durations.
So you sacrifice the high-performance timers.
You could have had both advantages with double, couldn't you?
By the way, I don't think an additional #ifdef _MSC_VER is necessary for the 64-bit types, because MSVC++ has supported signed/unsigned long long since 2005.
50 days of insignificantly less accurate instead of approx 2 hours of insignificantly more accurate? Sounds good to me!
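For concreteness, here is the arithmetic behind those two numbers (a quick back-of-the-envelope sketch; 2^23 comes from the size of float's significand):

#include <cstdio>

int main()
{
    // A float has a 24-bit significand, so the gap between adjacent
    // floats near x is roughly x * 2^-23. Millisecond precision is
    // lost once that gap exceeds 0.001 s:
    double floatLimit = 0.001 * 8388608.0;                 // 2^23 * 1 ms, in seconds
    std::printf("float keeps ms precision for ~%.2f hours\n",
                floatLimit / 3600.0);                      // ~2.33 hours

    // A Uint32 millisecond counter simply wraps at 2^32 ms:
    double wrapDays = 4294967296.0 / 1000.0 / 86400.0;
    std::printf("Uint32 milliseconds wrap after ~%.1f days\n", wrapDays); // ~49.7 days
}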
I used to play WoW. Never got addicted. Never saw what was addicting about it. In fact, it bored me.
Quote from: "OniLink10"
50 days of insignificantly less accurate instead of approx 2 hours of insignificantly more accurate? Sounds good to me!
Ha, you never played WOW, EVERCRACK, or BC2 :) Sleep what's that?
Quote from: "Mars_999"I used to play WoW. Never got addicted. Never saw what was addicting about it. In fact, it bored me.Quote from: "OniLink10"50 days of insignificantly less accurate instead of approx 2 hours of insignificantly more accurate? Sounds good to me!
Ha, you never played WOW, EVERCRACK, or BC2 :) Sleep what's that?
Your point is?
Quote from: "Mars_999"I used to play WoW. Never got addicted. Never saw what was addicting about it. In fact, it bored me.Quote from: "OniLink10"50 days of insignificantly less accurate instead of approx 2 hours of insignificantly more accurate? Sounds good to me!
Ha, you never played WOW, EVERCRACK, or BC2 :) Sleep what's that?
Your point is?
Quote from: "Mars_999"Couldn't we just overload the function? And keep the float version also?You cannot overload functions if their signature (name and parameters) is the same. At sf::Clock::GetElapsedTime(), only the return value was changed, not the signature.
The only option would be functions with different names, for example GetElapsedSeconds() and GetElapsedMilliseconds().
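A minimal illustration of why that overload cannot exist (hypothetical declarations, not SFML's actual header):

class Clock
{
public:
    unsigned int GetElapsedTime() const;  // OK
    // float GetElapsedTime() const;      // error: overloads may not differ
                                          // only by their return type
    // Distinct names sidestep the problem:
    unsigned int GetElapsedMilliseconds() const;
    float GetElapsedSeconds() const;
};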
My bad for not looking at the function ahead of time. I assumed it took a parameter... How about a template class, then?
Or just use uint64_t to begin with.
Couldn't we just overload the function? And keep the float version also? Not sure what harm that would do....
Quote from: "Mars_999"
The float version is obsolete really. The simple conversion from Uint32 milliseconds to float seconds comes for free basically
The conversion is still explicitly necessary; forgetting it may lead to warnings while the code still compiles. But it is of course possible. My point is that the same code is now more complicated on the user side: either there is the omnipresent /1000.f to get seconds, or you wrap the clock in a helper class like the one below.

There shouldn't be too much of a problem since the counting begins with the initialization of the clock, but I'd still suggest going with the flow and using a 64-bit value here.
I don't think a 64-bit type brings relevant advantages. To "go with the flow", Laurent should take std::time_t and not an arbitrary integer type. But in my opinion, sf::Uint32 is fine for milliseconds.

ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution, or encoding for it. Also unspecified are the meanings of arithmetic operations applied to time values.
Unix and POSIX-compliant systems implement time_t as an integer or real-floating type (typically a 32- or 64-bit integer) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds).
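To illustrate that convention (a small sketch using only the standard library):

#include <ctime>
#include <cstdio>

int main()
{
    // On Unix/POSIX systems this is the number of seconds since
    // 1970-01-01 00:00 UTC, not counting leap seconds.
    std::time_t now = std::time(0);
    std::printf("%ld seconds since the epoch\n", static_cast<long>(now));
}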
Isn't that what macros were invented for?
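Presumably something like this is meant (a hypothetical macro, assuming GetElapsedTime() now returns Uint32 milliseconds):

#include <SFML/System/Clock.hpp>

// Hypothetical convenience macro for the ubiquitous division:
#define MS_TO_SECONDS(ms) ((ms) / 1000.f)

int main()
{
    sf::Clock clock;
    float seconds = MS_TO_SECONDS(clock.GetElapsedTime());
    (void)seconds;
}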
// To keep this simple, it does not handle wrapping gracefully:
// do not run for more than 49 days
// (although because it gives you a float, it will become useless
// earlier than that anyway)
class ClockFloat
{
    sf::Clock clk;

public:
    inline void Reset()
    {
        clk.Reset();
    }

    inline float GetElapsedTime()
    {
        return clk.GetElapsedTime() / 1000.0f;
    }
};
This change is a clear and useless regression. Use double, keep the API compatibility, and keep it precise down to the microsecond level for 300 years. Time precision greater than 1 ms is important for various things, including audio clock sync and profiling. 1 ms of jitter is apparent in smooth 60 FPS animation, even if not particularly disturbing.
There are highly precise timers available on all major platforms, so there is also no reason to offer only 1 ms precision. Double-precision floating point is lightning fast to calculate with, and the extra four bytes in a clock's memory footprint cannot be significant either (especially since floats are promoted to double anyway when put into registers or passed to functions).
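The arithmetic behind the "300 years" figure, roughly (another back-of-the-envelope sketch):

#include <cstdio>

int main()
{
    // A double has a 53-bit significand, so the gap between adjacent
    // doubles near x is on the order of x * 2^-53. Microsecond
    // precision therefore holds up to roughly 2^53 microseconds:
    double limitSeconds = 9007199254740992.0 * 1e-6;  // 2^53 us, in seconds
    std::printf("~%.0f years\n",
                limitSeconds / (365.25 * 86400.0));   // ~285 years
}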
There will be a problem indeed, since 1.6 can't be returned (it will be 1 by the way, C++ truncates when it converts float to int).
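The truncation in question, as a one-line demonstration:

#include <cstdio>

int main()
{
    float measured = 1.6f;
    unsigned int ms = static_cast<unsigned int>(measured); // truncates toward zero
    std::printf("%u\n", ms);                               // prints 1
}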
Will GetFrameTime try to make up for previous errors / inaccurate measurements?
What if my loop lasted shorter than 1 ms (0.9 ms), would it be truncated to 0 ms?
In that case I would totally drop the frame timer and go for a global timer. Then the differences would be corrected, right?
What do you mean, the OS only guarantees 1 ms as the smallest measurement? If so, why was the float version displaying non-round FPS values, like 1700 FPS?
What if my loop lasted shorter than 1 ms (0.9 ms), would it be truncated to 0 ms?
This is most likely what is happening with my game, especially during window drag/resize when display is ignored; the truncation makes my game run a little bit faster :(. Any suggestions?
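For what it's worth, the "global timer" idea from above could look like this (a sketch, assuming GetElapsedTime() returns Uint32 milliseconds):

#include <SFML/System/Clock.hpp>

int main()
{
    sf::Clock global;
    sf::Uint32 last = 0;
    while (true)
    {
        sf::Uint32 now = global.GetElapsedTime();  // ms since the clock started
        sf::Uint32 delta = now - last;             // a 0.9 ms frame may yield 0 here,
        last = now;                                // but the error is not carried over:
                                                   // the next frame picks it up
        // update(delta); render();
        if (now > 1000)                            // stop after a second, for the demo
            break;
    }
}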
I see two different uses of clocks: long-term and precision measurements. Sometimes I wish there were one clock for each.
timestamp[frame_number] = gettime()
time_elapsed = timestamp[current_frame] - timestamp[current_frame - 1]
Milliseconds are too imprecise for some measurements, and very cumbersome for everyday work (seconds are usually easier to imagine and to work with) ;)
BTW, the clock mechanisms are easy to change or use. And the wiki exists.
Yes, but changing SFML itself is not a good idea; I prefer non-intrusive approaches. And as mentioned, you currently need to duplicate the whole SFML clock code if you want to measure microseconds in a platform-independent way.
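For reference, the platform-specific pieces you end up duplicating look roughly like this (a sketch covering Windows and POSIX; error handling omitted):

#include <SFML/Config.hpp>  // for sf::Uint64

#if defined(_WIN32)
    #include <windows.h>
    sf::Uint64 GetMicroseconds()
    {
        LARGE_INTEGER frequency, counter;
        QueryPerformanceFrequency(&frequency);
        QueryPerformanceCounter(&counter);
        return static_cast<sf::Uint64>(counter.QuadPart * 1000000.0 / frequency.QuadPart);
    }
#else
    #include <sys/time.h>
    sf::Uint64 GetMicroseconds()
    {
        timeval time;
        gettimeofday(&time, NULL);
        return static_cast<sf::Uint64>(time.tv_sec) * 1000000 + time.tv_usec;
    }
#endif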
Quote from: "Tronic"This change is a clear and useless regression.If you read the arguments FOR using integers in the thread, you would see that that "extra precision" you need wouldn't exist even if floats were used. OSes only guarantee precision to the millisecond level.
Use double, keep the API compatibility and keep it precise down to microsecond level for 300 years. Time precision greater than 1 ms is important for various things including audio clock sync, profiling and such. 1 ms jitter is apparent on smooth 60 FPS animation even if not particularly disturbing.
There are highly precise timers available on all major platforms, so there is also no reason to only offer 1 ms precision. Double precision floating-point is lightning fast to calculate with and the extra four bytes for a clock memory footprint cannot be significant either (especially since floats are promoted to double anyway when put into registers or passed to functions).
class TimeSpan
{
    // whatever
public:
    // constructors come here
    Uint64 GetNanoseconds() const;
    Uint32 GetMilliseconds() const;
    double GetSeconds() const;
};
How about a simple-to-use, flexible TimeSpan class?
- provide double seconds, but then some people will complain about the lack of exact timestamps, as well as the need to cast to float every calculation involving time (everything gets promoted to double when one of the operands is a double, but SFML uses float everywhere else)
- provide everything, but then I will complain about the bloated API
template <class T = Uint32>
class Clock;

template <>
class Clock<Uint32> { /* millisecond implementation */ };

template <>
class Clock<Uint64> { /* nanosecond implementation */ };

template <>
class Clock<double> { /* double implementation */ };

// A float implementation ???
template <>
class Clock<float> { /* float implementation */ };
float GetElapsedSeconds(); // or just GetElapsedTime()
sf::Uint64 GetElapsedNanoSeconds();
Are there interfaces for which float is too imprecise in real-life situations?
Yes, the problem is that they become less accurate too quickly (apparently it becomes a problem after 2.33 hours).
TimeSpan ts = foo.ElapsedTime();
double deltaTime = ts.AsDoubleSeconds();
update(deltaTime);
render(deltaTime);
The main drawback of doubles is the doubled memory usage.
I would personally use an unsigned 64-bit integer holding nanoseconds, so SFML can use the best available timer on each platform. It can be trivially converted to milliseconds, or to the desired floating-point value if the programmer wants something else.
A TimeSpan class could be made that internally uses nanoseconds and provides methods to retrieve the value in various other formats. The programmer doesn't even need to carry this class around; they just extract the needed value and propagate that in their code.
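A minimal sketch of such a TimeSpan (nanoseconds inside, converted on demand; the getters follow the earlier suggestion):

#include <SFML/Config.hpp>  // for sf::Uint32 / sf::Uint64

class TimeSpan
{
public:
    explicit TimeSpan(sf::Uint64 nanoseconds) : myNanoseconds(nanoseconds) {}

    sf::Uint64 GetNanoseconds() const { return myNanoseconds; }
    sf::Uint32 GetMilliseconds() const { return static_cast<sf::Uint32>(myNanoseconds / 1000000); }
    double GetSeconds() const { return myNanoseconds / 1000000000.0; }

private:
    sf::Uint64 myNanoseconds;
};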
final int C = 10;
long[] res = new long[C];
int i = 0;
long tp = System.nanoTime();
while (i < C)
{
    long tn = System.nanoTime();
    if (tn != tp)            // record only when the timer has actually ticked
    {
        long d = tn - tp;    // tick-to-tick delta (~905 ns in the output below)
        tp = tn;
        res[i] = tn;         // store the raw timestamp of each tick
        i++;
    }
}
for (i = 0; i < C; i++)
    System.out.println(Integer.toString(i) + " = " + res[i]);
0 = 181327039326786
1 = 181327039328598
2 = 181327039329503
3 = 181327039330409
4 = 181327039331315
5 = 181327039332221
6 = 181327039332674
7 = 181327039333580
8 = 181327039334485
9 = 181327039335391
Well, micro is still a thousand times better than millis! :D
The time API will be updated soon, with a big surprise :lol:
We can measure time down to the planck scale?!
Use of std::chrono-like code maybe?
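If that guess is right, usage might look roughly like the standard C++11 <chrono> facilities (pure speculation, shown for comparison):

#include <chrono>
#include <cstdio>

int main()
{
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    // ... do some work ...
    std::chrono::microseconds elapsed =
        std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start);
    std::printf("%lld microseconds\n", static_cast<long long>(elapsed.count()));
}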
float is not precise enough for timing.
Did you look at the source code? Internally it's a Uint64 (microseconds), and it's only cast to float when asked.
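That design, roughly (a paraphrase of the description above, not SFML's actual source):

#include <SFML/Config.hpp>

class Time
{
public:
    explicit Time(sf::Uint64 microseconds) : myMicroseconds(microseconds) {}

    // The lossless integer value is kept internally; the cast to float
    // happens only at the moment the caller asks for seconds.
    float AsSeconds() const { return myMicroseconds / 1000000.f; }

private:
    sf::Uint64 myMicroseconds;
};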