Ok, thanks for the information, but how can the GPU do smooth scaling in real time with no more processing than using a pre-scaled image?
As far as I know, smooth scaling requires calculating average colors and such.
Sorry if I'm being annoying... I'm just curious and want my engine to run on as many computers as possible.
I think (and Laurent can correct me on this) that the reason is that no matter what size the image is, it undergoes two transforms as it goes through the rendering pipeline (i.e. even if the object has a scale of 1, it still goes through the transform).
What goes on is that on the CPU side, your image has translation, rotation, and scale matrices, but they aren't applied to the image there. When you render the image, you pass these matrices to the GPU. The GPU applies them, transforming the image into "world space," so that everything is relative to the origin. Later in the process, the GPU takes your camera/viewport's parameters and transforms everything again so that it is relative to the view's position. That's the final image you see.
So as you can see, no matter what you do to the image itself, these transformations are going to be applied anyway.
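To make that concrete, here's a rough sketch in plain C++ (not actual SFML or driver code; the matrix layout and numbers are just made up for illustration). The point is that a scale of 1 still produces a matrix, and every vertex gets multiplied by the model and view matrices either way, so scaling costs the GPU nothing extra:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// Minimal 3x3 matrix for 2D homogeneous transforms (illustration only).
struct Mat3 {
    std::array<float, 9> m; // row-major

    static Mat3 translation(float tx, float ty) {
        return {{1, 0, tx,
                 0, 1, ty,
                 0, 0, 1}};
    }
    static Mat3 rotation(float radians) {
        float c = std::cos(radians), s = std::sin(radians);
        return {{c, -s, 0,
                 s,  c, 0,
                 0,  0, 1}};
    }
    static Mat3 scaling(float sx, float sy) {
        return {{sx, 0,  0,
                 0,  sy, 0,
                 0,  0,  1}};
    }
    Mat3 operator*(const Mat3& o) const {
        Mat3 r{};
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col) {
                float sum = 0.f;
                for (int k = 0; k < 3; ++k)
                    sum += m[row * 3 + k] * o.m[k * 3 + col];
                r.m[row * 3 + col] = sum;
            }
        return r;
    }
    // Apply the transform to a 2D point (x, y, 1).
    void apply(float& x, float& y) const {
        float nx = m[0] * x + m[1] * y + m[2];
        float ny = m[3] * x + m[4] * y + m[5];
        x = nx; y = ny;
    }
};

int main() {
    // "Model" matrix: the sprite's translation, rotation and scale.
    // A scale of 1 still ends up in the matrix; the per-vertex work
    // done by the GPU is the same whether you scale or not.
    Mat3 model = Mat3::translation(100.f, 50.f)
               * Mat3::rotation(0.5f)
               * Mat3::scaling(1.f, 1.f);

    // "View" matrix: moves everything relative to the camera/viewport.
    Mat3 view = Mat3::translation(-32.f, -24.f);

    // Each vertex of the sprite's quad goes through both transforms
    // every frame, regardless of the scale value.
    float x = 16.f, y = 16.f; // one corner of the quad
    (view * model).apply(x, y);
    std::printf("transformed vertex: (%f, %f)\n", x, y);
}
```

The smoothing part of your question (averaging neighboring colors) is done by the texture sampling hardware when it fetches pixels for the scaled quad, so it doesn't add work on your end either, but that's a separate stage from the transforms above.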