> It's only when they are rendered that the pixels used to display them are decided.

That's what I wanted to ask: how will it be decided when it is rendered?
> Of course it is. Why do you think it could be different? Did you have any problem?

OK, cool! Well, I did not have any problems, but here is the thought that troubled me. Suppose we take a scale factor that is not very nice, say pi. Then, I suppose, somewhere internally the sprite width is multiplied by pi, yielding another float. Now, this float needs to become an integer (since it represents a number of pixels). Wouldn't it make sense to ceil this float value?
> I suppose, somewhere internally, sprite width is multiplied by PI, yielding another float.

Right.
> Now, this float needs to be integer (since it shows amount of pixels).

Not actually true. The float "width" is passed directly to OpenGL, which decides how many pixels to use depending on its current state (e.g. its view).
> But apparently rounding is used, but that was the source of my confusion.

It's actually not. At least not until it's rasterized by OpenGL. At that point, OpenGL decides how the pixels look, which also takes anti-aliasing into consideration, so that a width of 10.5 would become 10 opaque pixels plus one semi-transparent pixel.
> The float "width" is given directly to OpenGL and it decides how many pixels it wants to use depending on its current state (e.g. its view).

But does OpenGL decide what will be in sprite.getLocalBounds().width?
> But does OpenGL decide what will be in sprite.getLocalBounds().width?

No, sprite.getLocalBounds().width simply returns sprite.getTextureRect().width (don't hesitate to look at the source code). And the texture rect is set by you, so you have everything under control.