SFML community forums
Help => System => Topic started by: kidchameleon on August 15, 2011, 12:35:00 am
-
Hi All,
After using version 1.6 for a while, I have been moving my sprites the way the tutorial suggests:
const float Speed = 50.f;
float Left = 0.f;
float Top = 0.f;

while (App.IsOpened())
{
    if (App.GetInput().IsKeyDown(sf::Key::Left))  Left -= Speed * App.GetFrameTime();
    if (App.GetInput().IsKeyDown(sf::Key::Right)) Left += Speed * App.GetFrameTime();
    if (App.GetInput().IsKeyDown(sf::Key::Up))    Top  -= Speed * App.GetFrameTime();
    if (App.GetInput().IsKeyDown(sf::Key::Down))  Top  += Speed * App.GetFrameTime();
}
Now, using the latest snapshot, I realise that time is handled with Uint32. How would the above code work in 2.0? I have found that my game's timing has gone very wrong since moving to 2.0.
Thanks for reading!
-
To get seconds, divide the value returned by GetFrameTime() by 1000.f (note the floating-point literal: dividing by the integer 1000 would truncate the result).
And with the latest SFML 2 revision, you will also have to change the input handling, as there is no more sf::Key namespace or sf::Input class.
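A minimal sketch of the conversion (the toSeconds helper is illustrative, not part of SFML):

```cpp
#include <cstdint>

// In the SFML 2 snapshot, GetFrameTime() returns whole milliseconds
// (a Uint32). Dividing by a float literal keeps the fractional part;
// dividing by the integer 1000 would truncate small values to 0.
float toSeconds(std::uint32_t milliseconds)
{
    return milliseconds / 1000.f; // float division, not integer division
}
```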
-
Thanks Nexus,
I have no problem with input or anything like that. It's just that I'm finding it very hard to get my head around the change from float to Uint32. I know pretty much nothing about this type of variable. Can a Uint32 hold negative values? And can it have a decimal point?
Thanks for your help
-
Thanks Nexus,
I have no problem with input or anything like that. It's just that I'm finding it very hard to get my head around the change from float to Uint32. I know pretty much nothing about this type of variable. Can a Uint32 hold negative values? And can it have a decimal point?
Thanks for your help
Uint32 is short for unsigned 32-bit integer.
Unsigned means it can't be negative. Only zero and positive numbers are allowed.
Integer means whole numbers only. No decimals.
32-bit is just the size.
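A quick demonstration of both properties in plain C++ (std::uint32_t is the standard type that sf::Uint32 corresponds to):

```cpp
#include <cstdint>

// "Can it have minus values?" -- no: going below zero wraps around.
std::uint32_t underflow()
{
    std::uint32_t u = 0;
    return u - 1;     // wraps to 2^32 - 1 (4294967295), never negative
}

// "Can it have a decimal point?" -- no: integer division truncates.
std::uint32_t truncatedSeconds(std::uint32_t ms)
{
    return ms / 1000; // 16 / 1000 == 0, the fraction is simply dropped
}
```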
-
Uint32 is short for unsigned 32-bit integer.
Unsigned means it can't be negative. Only zero and positive numbers are allowed.
Integer means whole numbers only. No decimals.
32-bit is just the size.
Thanks!
So perhaps my problem is that my character is moving at speeds of less than 1 (0.25) per frame. My delta time is now a Uint32, so there is a problem multiplying my character's movement by delta time, because one is a float and one is an integer?
OK, I'm going to make a minimal program and try to figure this one out. Thanks for the replies, people!
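For anyone hitting the same wall, both failure modes can be shown with plain integers (the 16 ms frame time and the 50 px/s speed are made-up example numbers):

```cpp
#include <cstdint>

// Two ways 1.6-style movement code goes wrong once GetFrameTime()
// returns whole milliseconds instead of float seconds:

float forgotToConvert(std::uint32_t frameTimeMs)
{
    return 50.f * frameTimeMs;            // treats ms as seconds: 1000x too far
}

float integerDivision(std::uint32_t frameTimeMs)
{
    return 50.f * (frameTimeMs / 1000);   // 16 / 1000 == 0: sprite never moves
}

float correct(std::uint32_t frameTimeMs)
{
    return 50.f * (frameTimeMs / 1000.f); // float division keeps the fraction
}
```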
-
OK, so I got it working; perhaps this will help anybody else with this problem...
// time stuff
sf::Uint32 gameTime = aClock.GetElapsedTime();   // total milliseconds elapsed
sf::Uint32 seconds = gameTime / 1000;            // whole seconds (integer division)
float delta = Game1.GetFrameTime() / 1000.0f;    // frame time in seconds
sf::Uint32 frameTime = Game1.GetFrameTime();     // frame time in milliseconds
-
I'm having a similar problem, so I'll post here instead of making a new thread.
Since the change from float seconds to integer milliseconds, I get only 0 or 1 from GetFrameTime, as my game isn't very demanding.
When float seconds were used I got a more accurate number, and had no problem making my game frame-rate independent even running at 15000 fps.
Is there any simple way to get the accuracy I had before?
I tried
GetFrameTime() / 1000.f
but it does not increase the accuracy, and I only get the values 0 and 0.001.
-
Seriously... don't let your game run at 15000 FPS ;)
-
Seriously... don't let your game run at 15000 FPS ;)
I love performance, and I always want to see how my changes impact performance, so I usually don't cap fps. :P
However, there are problems even at lower frame rates.
For example, let's say I use the delta value for calculating my physics.
At 100 fps, the delta is 10.
At 91 fps, the delta is 11.
This is a multiplayer game, and 2 computers are connected; one with 103 fps, and one running at 97.
I don't know how the value is rounded, but if it's rounded to the closest whole number, this means that both computers will run the physics on a scale of 10.
The faster computer (103 fps) will run the calculations 6% more often, giving different results.
This is a huge problem for me, and I can't see a good reason to make the frame time less accurate.
Also, going only from 104 to 105 fps will increase the delta by 1, making the calculations ~10% faster.
This assumes it rounds to the closest number, but the same still applies if the decimals are just cut off.
In any case, it is at the moment way too inaccurate.
[edit]
I don't know if I explained well enough, but I made a quick picture:
(http://ompldr.org/vYThkZg/Untitled.jpg)
The blue line is how my physics are operating at different fps's with the new ms int value.
-
I love performance, and I always want to see how my changes impact performance, so I usually don't cap fps
Such high FPS values don't mean much, so in fact you don't see anything at all.
For example, let's say I use the delta value for calculating my physics.
on 100 fps, the delta is 10.
on 91 fps, the delta is 11.
This is a multiplayer game, and 2 computers are connected; one with 103 fps, and one running 97.
Timing has to be very accurate for physics calculation, you should:
- not run them at more than 60 FPS
- use a fixed timestep, with an accumulator, to be able to compensate and get the exact timestep that you expect at every iteration
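The fixed-timestep pattern can be sketched in plain C++ (the 16 ms step and the helper names are illustrative, not SFML API):

```cpp
#include <cstdint>

// Fixed timestep with an accumulator: frame times of any length are
// banked, and the physics always advances by the same fixed step, so
// every machine runs identical calculations regardless of frame rate.
int runFixedSteps(const std::uint32_t* frameTimesMs, int frameCount)
{
    const std::uint32_t step = 16;      // fixed physics step (~60 updates/s)
    std::uint32_t accumulator = 0;
    int physicsSteps = 0;

    for (int i = 0; i < frameCount; ++i)
    {
        accumulator += frameTimesMs[i]; // bank the time this frame took
        while (accumulator >= step)     // run as many whole steps as fit
        {
            // updatePhysics(step) would go here
            ++physicsSteps;
            accumulator -= step;
        }
        // leftover time stays banked for the next frame
    }
    return physicsSteps;
}
```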
This is a huge problem for me, and I can't see a good reason to make the frametime less accurate.
It's not less accurate. Even with float numbers, SFML never ensured more than 1 ms precision, it was up to the OS. At least now it's clear and consistent across OSes that 1 ms is the common, minimum precision that you should expect.
In any case, it is at the moment way too inaccurate.
Many libraries don't go below 1 ms, you don't need more. I think you're relying too much on it. With further tests you would see that, according to the OS again, the timing is more or less reliable; it really can't be used blindly.
Instead, you should ask yourself how to achieve consistent timing according to what you want to do.
-
Such high FPS values don't mean much, so in fact you don't see anything at all.
Is fps not a good representation of performance?
If my fps for example goes down by 25% after a change, I know that the change caused every frame to be rendered ~33% slower.
Timing has to be very accurate for physics calculation, you should:
- not run them at more than 60 FPS
- use a fixed timestep, with an accumulator, to be able to compensate and get the exact timestep that you expect at every iteration
You are right about this, I should probably put physics calculation in another thread.
It's not less accurate. Even with float numbers, SFML never ensured more than 1 ms precision, it was up to the OS.
I know that only 1ms precision is assured by the OS, but when something takes 3.4ms, 3.4 is very much more accurate than 3.
I played my game with no problems on XP at 50 fps versus a Linux user running it at 15000 fps.
At least now it's clear and consistent across OSes that 1 ms is the common, minimum precision that you should expect.
Yes, I think most people know that 1 ms is the minimum precision, but not utilizing the extra precision provided by all OSes is a bad idea IMO, even if there are small inconsistencies (and they are very small).
Many libraries don't go below 1 ms, you don't need more. I think you're relying too much on it. With further tests you would see that, according to the OS again, the timing is more or less reliable; it really can't be used blindly.
Instead you should ask yourself how to achieve consistant timing according to what you want to do.
If the inaccuracies are unnoticeable at 15000 fps, I don't think they're a problem.
I prefer having things scale according to performance, so that even computers that can't run the game at more than 30fps at least will be able to play.
If I'm worried about the timer inconsistencies between OS's, I can easily take care of that by rounding the value to whole ms.
This just seems like a huge downgrade to me.
-
Such high FPS values don't mean much, so in fact you don't see anything at all.
Is fps not a good representation of performance?
If my fps for example goes down by 25% after a change, I know that the change caused every frame to be rendered ~33% slower.
If you go from rendering no images to one image, you get a MASSIVE drop in FPS. And then when you add a second image, the drop is only half of what it was for the first image. That doesn't mean that the second image rendered faster than the first.
You should be measuring frame time instead for performance.
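The nonlinearity is easy to put numbers on (a sketch; the 1 ms added cost is an arbitrary example):

```cpp
// FPS is 1000 / frameTimeMs, so the same 1 ms of added work produces a
// huge FPS drop at high frame rates and a barely visible one at low
// frame rates, even though the cost is identical.
float fps(float frameTimeMs)
{
    return 1000.f / frameTimeMs;
}
```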
-
If you go from rendering no images to one image, you get a MASSIVE drop in FPS. And then when you add a second image, the drop is only half of what it was for the first image. That doesn't mean that the second image rendered faster than the first.
You should be measuring frame time instead for performance.
Fps is 1/frame time, so it's easy to do the math quickly in your head.
What I'm trying to say is that the fps is actually a very precise rating of performance, assuming the game isn't multithreaded.
-
Is fps not a good representation of performance?
No, it's just a visual indicator, but nothing should be calculated/compared with it. You should just use it to say "it's fast" or "it's slow".
High FPS values are even worse; they don't mean anything. 15000 FPS means 0.067 milliseconds per frame; with such a low number, anything that you add to your game loop (rendering one more sprite, handling a mouse move event) will make this number jump and the corresponding FPS drop a lot. What's your conclusion? That your game is much slower for rendering one sprite or handling one mouse event?
Moreover, the same thing at lower FPS (like 30) will be hardly noticeable. So depending on the base FPS you compare to, your conclusions will be different.
Fps is 1/frame time, so it's easy to do the math quickly in your head
That's the point: FPS is 1/N. It's not linear, so the higher it is, the less relevant it becomes.
I know that only 1ms precision is assured by the OS, but when something takes 3.4ms, 3.4 is very much more accurate than 3.
Sorry, I should have spoken about resolution, not accuracy. 1ms is the minimum resolution.
15000 is too high, seriously. You can't expect anything related to timing to be reliable at such small time intervals (at least on desktop computers with common OSes). Reduce this number to something acceptable, that will solve all your problems.
-
No, it's just a visual indicator, but nothing should be calculated/compared with it. You should just use it to say "it's fast" or "it's slow".
High FPS values are even worse; they don't mean anything. 15000 FPS means 0.067 milliseconds per frame; with such a low number, anything that you add to your game loop (rendering one more sprite, handling a mouse move event) will make this number jump and the corresponding FPS drop a lot. What's your conclusion? That your game is much slower for rendering one sprite or handling one mouse event?
Moreover, the same thing at lower FPS (like 30) will be hardly noticeable. So depending on the base FPS you compare to, your conclusions will be different.
If I first have 1000 fps, I know that the delta is 1 ms.
If I then make a change that makes the fps drop to 900, I know that the frame time is ~1.1 ms.
Performance has decreased by 10%, and the fps shows it: it has dropped by 10% too.
The same applies at lower frame rates, just at a lower scale.
I really don't see the problem.
That's the point: FPS is 1/N. It's not linear, so the higher it is, the less relevant it becomes.
The ms is not linear, but the percentage is. Knowing by how many percent my performance has changed is enough for me.
15000 is too high, seriously. You can't expect anything related to timing to be reliable at such small time intervals (at least on desktop computers with common OSes). Reduce this number to something acceptable, that will solve all your problems.
That's very strange considering my results. Looking at the 15k fps game and 30 fps game side-by-side you don't see any differences in movement.
I have read around a bit though, and it seems to be recommended to have a constant timestep value.
I will try putting the physics in a new thread, running at 60 per second, and see how it goes.
-
If I first have 1000 fps, I know that the delta is 1 ms.
If I then make a change that makes the fps drop to 900, I know that the frame time is ~1.1 ms.
Performance has decreased by 10%, and the fps shows it: it has dropped by 10% too.
The same applies at lower frame rates, just at a lower scale.
I really don't see the problem.
The problem is that you compare relative values, not absolute ones.
The only relevant information is the amount of milliseconds that an operation takes to execute.
Let's say that a small operation (an event, or drawing one sprite) takes 1 ms. At 1000 FPS, this makes a huge relative difference, percentage or not. At 50 FPS, it's not significant anymore. And it's not the same percentage, although it's the exact same operation which takes the exact same amount of time.
You can't say that adding one sprite will always lower your FPS by 10%. But you can say that it will always eat 1 ms from your game loop.
-
The problem is that you compare relative values, not absolute ones.
The only relevant information is the amount of milliseconds that an operation takes to execute.
Let's say that a small operation (an event, or drawing one sprite) takes 1 ms. At 1000 FPS, this makes a huge relative difference, percentage or not. At 50 FPS, it's not significant anymore. And it's not the same percentage, although it's the exact same operation which takes the exact same amount of time.
You can't say that adding one sprite will always lower your FPS by 10%. But you can say that it will always eat 1 ms from your game loop.
As long as I know what the value is relative to, I also know how many ms the added operation takes.
In any case, let's drop that discussion.
I'm interested though, how did SFML get the float value in the old implementation?
I guess I could make my own class that looks like it, as I still want to be able to show an fps counter in my game (with higher precision than the 59, 63, 67 fps steps that whole milliseconds give).
[edit] Nevermind, I'll check the source for the old version.
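One way to get finer FPS readings without touching SFML's timer is to average over a window of frames (a sketch; the helper is not part of SFML):

```cpp
#include <cstdint>

// Averaging whole-millisecond frame times over several frames recovers
// the fractional precision a single frame's Uint32 value can't express.
float averageFps(const std::uint32_t* frameTimesMs, int frameCount)
{
    std::uint32_t totalMs = 0;
    for (int i = 0; i < frameCount; ++i)
        totalMs += frameTimesMs[i];

    if (totalMs == 0)
        return 0.f;                   // avoid dividing by zero

    return frameCount * 1000.f / totalMs;
}
```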
-
The sources are the same as before, there's just a cast to integer somewhere in the middle now.
-
The sources are the same as before, there's just a cast to integer somewhere in the middle now.
Oh, that made things a lot easier. Thanks for the help!