I am not entirely sure whether this is possible. I looked through the OpenGL documentation and the SFML source but couldn't really find it anywhere.
What I'm talking about is specifying the target texture format. For instance, the format of a normal render texture is R8G8B8A8 unsigned normalized (0-1 range), and I assume SFML uses the same under OpenGL. For an HDR texture it would be R16G16B16A16 floating point (half-precision floats). And so on. There are roughly 20 possible formats, each with its own uses; some are the only way to achieve certain post-processing effects, while others are simply optimizations.
I don't feel this is out of scope for SFML, since fragment shading is used more and more in 2D games. But since I couldn't find anything in SFML's source that actually sets a format, I have no idea whether it is possible. It would be very strange if you couldn't, so I might just be blind or looking in the wrong place. Or maybe OpenGL doesn't support it until OpenGL 3.0?
The interface wouldn't be that alien; it would work much like the style flags passed to sf::Window, I guess:
class RenderTexture
{
public:
    // aFormat selects the texture's internal format (e.g. the default RGBA8 or a
    // floating-point format). aCreateDepthFlag needs a default value too, since it
    // comes after a defaulted parameter.
    void Create( unsigned int aWidth, unsigned int aHeight,
                 sf::Uint32 aFormat = sf::Format::Default,
                 bool aCreateDepthFlag = false );
};
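For illustration, a hypothetical usage sketch; sf::Format and its enumerators are made up for this post and don't exist in SFML today:

// Hypothetical usage of the proposed interface; sf::Format and these
// enumerators are assumptions for illustration only.
sf::RenderTexture hdrScene;
hdrScene.Create( 1280, 720, sf::Format::R16G16B16A16F, false ); // HDR color target, no depth buffer

sf::RenderTexture shadowMap;
shadowMap.Create( 1024, 1024, sf::Format::R32F, true );         // single-channel float target with depth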
The only real difference in the interface is handled in GLSL: the value written by the fragment shader is interpreted differently, but that's part of OpenGL and not SFML.
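Just to illustrate what I mean, a minimal sketch of the GLSL side (plain GLSL kept in a C++ string; the uniform name is arbitrary):

// Sketch of the shader side: with a floating-point target bound, the fragment
// shader can write values outside the 0-1 range and they are preserved.
const char* hdrFragmentShader =
    "uniform sampler2D source;\n"
    "void main()\n"
    "{\n"
    "    vec4 color = texture2D(source, gl_TexCoord[0].xy);\n"
    "    gl_FragColor = color * 4.0; // not clamped when rendering into an RGBA16F target\n"
    "}\n";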
Use cases for this feature would be (and yes, I am thinking from a 2D perspective):
- High-dynamic-range colors
- Shadow buffers
- Glow effects (not bloom, but artist-controlled glow)
- GPU-based custom calculations, like average luminance (see the sketch after this list)
- Fog of war (which could fall under the previous point)
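For the average-luminance case, one common approach (nothing SFML provides; just a sketch assuming the scene was already rendered into an RGBA16F texture whose GL handle is sceneTexture, and that an extension loader such as GLEW exposes glGenerateMipmap) is to let the GPU do the averaging via the mip chain:

#include <GL/glew.h> // assumption: loader providing glGenerateMipmap
#include <algorithm>
#include <cmath>

// Average the image by letting the driver downsample it to a 1x1 mip level,
// then read that single texel back.
float AverageLuminance( GLuint sceneTexture, unsigned int width, unsigned int height )
{
    glBindTexture( GL_TEXTURE_2D, sceneTexture );
    glGenerateMipmap( GL_TEXTURE_2D ); // GL 3.0 / EXT_framebuffer_object

    // The highest mip level is a single texel holding the average of the image.
    int topLevel = static_cast<int>( std::log2( static_cast<float>( std::max( width, height ) ) ) );

    float averagePixel[4] = { 0.f, 0.f, 0.f, 0.f };
    glGetTexImage( GL_TEXTURE_2D, topLevel, GL_RGBA, GL_FLOAT, averagePixel );

    // Standard Rec. 709 luminance weights.
    return 0.2126f * averagePixel[0] + 0.7152f * averagePixel[1] + 0.0722f * averagePixel[2];
}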
Those are the uses I have personally needed this feature for; there may be others I have missed. Some of them can be done with the default texture format, but HDR colors and the glow effect require a more capable format.
Edit: Actually found where it is done:
GLCheck(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, myTextureWidth, myTextureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL));
So I guess for something like an HDR texture it would instead be GL_RGBA16F for the internal format and GL_FLOAT for the type?
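That is, a hedged sketch of what the call might look like for a half-float HDR target (assuming the driver supports floating-point textures, i.e. OpenGL 3.0 or ARB_texture_float; plain GL_RGBA16 without the F suffix would still be a normalized 0-1 format):

GLCheck(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, myTextureWidth, myTextureHeight, 0, GL_RGBA, GL_FLOAT, NULL));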