Reading large images fails because of texture size limitations. To load a large image, I have had to expose the ImageLoader; that way I can read a large JPEG, decimate it, and only then create a sf::Image using the LoadFromPixels function.
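The decimation step above can be sketched without any SFML types at all: keep every Nth pixel of an RGBA8 buffer in each dimension so the result fits under the texture size limit. This is a minimal sketch, assuming a plain `std::vector<unsigned char>` pixel buffer and nearest-neighbour sampling; the helper name `Decimate` is hypothetical.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: keep every `factor`-th pixel in each dimension of an
// RGBA8 buffer, so the result fits under the GPU's maximum texture size.
// Nearest-neighbour sampling: no filtering, just picking source pixels.
std::vector<unsigned char> Decimate(const std::vector<unsigned char>& rgba,
                                    std::size_t width, std::size_t height,
                                    std::size_t factor)
{
    std::size_t outW = width / factor;
    std::size_t outH = height / factor;
    std::vector<unsigned char> out;
    out.reserve(outW * outH * 4);
    for (std::size_t y = 0; y < outH; ++y)
        for (std::size_t x = 0; x < outW; ++x)
        {
            // Index of the source pixel, 4 bytes (RGBA) per pixel
            std::size_t src = ((y * factor) * width + (x * factor)) * 4;
            for (int c = 0; c < 4; ++c)
                out.push_back(rgba[src + c]);
        }
    return out;
}
```

The decimated buffer could then be handed to `myImage.LoadFromPixels(outW, outH, decimated.data())`, keeping the final image within the hardware limit.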
This is more or less planned for SFML 2.
Basically, the concept of "image" needs to be split in two separate classes:
- one that loads/saves/manipulates pixels on the CPU
- one that represents a texture on the GPU that can be used by a sprite
However, this would make the API too complicated and confusing (you would have to use three different classes before being able to actually display an image), so I have to think more about it.
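The split described above could look roughly like this. This is only a sketch of the idea, not the actual planned API; the class names, member names, and the `LoadFromImage` entry point are all hypothetical, and the real GPU upload (glTexImage2D and friends) is omitted.

```cpp
#include <vector>

// Hypothetical CPU-side class: owns the pixels, handles loading,
// saving, and per-pixel manipulation. Never touches the GPU.
class Image
{
public:
    Image(unsigned int width, unsigned int height)
        : myWidth(width), myHeight(height), myPixels(width * height * 4) {}
    unsigned int GetWidth() const  { return myWidth; }
    unsigned int GetHeight() const { return myHeight; }
    const unsigned char* GetPixels() const { return myPixels.data(); }
private:
    unsigned int myWidth, myHeight;
    std::vector<unsigned char> myPixels; // RGBA8, one allocation
};

// Hypothetical GPU-side class: a handle to pixels uploaded to video
// memory; this is what a sprite would reference for drawing.
class Texture
{
public:
    bool LoadFromImage(const Image& image)
    {
        myWidth  = image.GetWidth();
        myHeight = image.GetHeight();
        // ...upload image.GetPixels() to video memory here...
        return true;
    }
    unsigned int GetWidth() const  { return myWidth; }
    unsigned int GetHeight() const { return myHeight; }
private:
    unsigned int myWidth = 0, myHeight = 0;
};
```

The design concern in the text is visible here: to draw anything the user would need an Image, a Texture, and a sprite, which is the "3 different classes" problem.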
In animating the slide show, I often interpolate between transformation states. It would be easier to work directly with the transformation matrix. I see no reason not to expose GetMatrix as well as to add a SetMatrix function.
Matrices are purely an implementation detail; in the future, the transformations may be implemented differently.
On the user side, there are only positions, rotations and scales. Why can't you interpolate these values instead of the whole matrix?
I would also recommend some changes regarding images. Since images rarely change size, the overhead of STL vectors is not really justified. However, moving images around would be much easier if smart pointers were used to handle the allocation/deallocation of memory. In addition, if an image container also contained pointers to the start of every line, and appropriate operator[] functions were provided, then pixels could be accessed using doubly indexed notation such as: myImage[row][column].
There's no need to store pointers to rows and provide an operator[]. An operator() taking x and y directly is cleaner and more efficient. It will probably be part of the class that I talked about above.
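The operator() approach can be shown with a small standalone container: one contiguous allocation, the index computed as `y * width + x`, and no per-row pointer table to build or keep in sync. The class name `PixelBuffer` and the 32-bit-per-pixel layout are assumptions for the sketch, not the actual class.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical pixel container illustrating operator()(x, y) access.
// A single flat allocation; the offset is computed on the fly, so no
// row-pointer array (and no operator[] chain) is needed.
class PixelBuffer
{
public:
    PixelBuffer(std::size_t width, std::size_t height)
        : myWidth(width), myPixels(width * height) {}

    // Read-write access to the pixel at (x, y)
    std::uint32_t& operator()(std::size_t x, std::size_t y)
    {
        return myPixels[y * myWidth + x];
    }

    // Read-only access for const buffers
    std::uint32_t operator()(std::size_t x, std::size_t y) const
    {
        return myPixels[y * myWidth + x];
    }

private:
    std::size_t myWidth;
    std::vector<std::uint32_t> myPixels; // one packed RGBA pixel per element
};
```

Usage is then `myImage(x, y)` rather than `myImage[row][column]`, which avoids both the extra storage for row pointers and the double indirection on every access.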