
Show Posts



Topics - Fewes

1
Hi there! I was very excited to see that support for native sRGB handling was added to the repo a few weeks back. After doing some testing however I noticed that if you use sf::RenderTexture to draw anything anywhere between sampling the texture and drawing to the window, you get terrible color banding.

Here are three images rendered with different setups to illustrate the problem. The texture drawn is a simple 8 bits/channel gradient.

1. Texture is drawn to the RenderTexture via a Sprite, then the RenderTexture is drawn to the window via a Sprite.
ContextSettings.sRgbCapable is set to false and Texture.setSrgb is set to false.



2. Texture is drawn directly to the window via a Sprite.
ContextSettings.sRgbCapable is set to true and Texture.setSrgb is set to true.



3. Texture is drawn to the RenderTexture via a Sprite, then the RenderTexture is drawn to the window via a Sprite.
ContextSettings.sRgbCapable is set to true and Texture.setSrgb is set to true.



As you can see, the result suffers from banded colors, which is exactly what you would expect when gamma-decoded (linear) values are stored improperly at 8 bits per channel.

I'm not very well versed in color spaces, but it seems something is missing: the RenderTexture should convert on input and output when rendering, just like the window does. I'm not sure if it can be remedied somehow, but I thought I should bring it to light.

Here's the code for the program shown above. Change the boolean at the top to toggle between using sRGB conversion or not.

#include <SFML/Graphics.hpp>

int main()
{
        bool srgb = true;

        sf::RenderWindow window;
        sf::VideoMode videoMode;
        videoMode.width = 720;
        videoMode.height = 405;
        sf::ContextSettings contextSettings;
        contextSettings.sRgbCapable = srgb;

        window.create(videoMode, "Linear Color Space Testing", sf::Style::Default, contextSettings);

        sf::Texture texture;
        texture.setSmooth(true);
        texture.setSrgb(srgb);
        texture.loadFromFile("Gradient.png");

        sf::Sprite sprite;
        sprite.setTexture(texture);

        // Make sure the sprite fills the screen
        float scale_x = (float)videoMode.width / (float)texture.getSize().x;
        float scale_y = (float)videoMode.height / (float)texture.getSize().y;
        sprite.setScale(scale_x, scale_y);

        sf::RenderTexture renderTexture;
        renderTexture.setSmooth(true);
        renderTexture.create(videoMode.width, videoMode.height);

        sf::Sprite renderTextureDrawable;
        renderTextureDrawable.setTexture(renderTexture.getTexture());

        while (window.isOpen())
        {
                // Poll events
                sf::Event event;
                while (window.pollEvent(event))
                {
                        // Close on window-close event or Escape key
                        if (event.type == sf::Event::Closed || (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::Key::Escape))
                        {
                                window.close();
                        }
                }

                // Clear render texture
                renderTexture.clear(sf::Color(0, 0, 0, 255));
                // Draw gradient sprite
                renderTexture.draw(sprite);
                // Finished drawing to render texture
                renderTexture.display();

                // Clear window
                window.clear(sf::Color(0, 0, 0, 255));
                // Draw render texture drawable
                window.draw(renderTextureDrawable);
                // Finished drawing to the window
                window.display();

        }

        return 0;
}

And here's the gradient texture:


2
Graphics / Storing data in RenderTexture alpha channel
« on: June 05, 2015, 12:49:24 pm »
Hi,

I've got some code set up for drawing to a RenderTexture instead of directly to the window, so I can run screen-space shaders over the "world". This RenderTexture can be thought of as a scene buffer, and in my case it doesn't need to be transparent, since all I do is draw it to the window at the end. That frees up the alpha channel of the RenderTexture, which is great because I can use it to store a depth buffer in the same draw call. Or so I thought.

It turns out it doesn't behave the way you would expect: when I supply the RenderTexture to a shader, the alpha channel is completely white, and the alpha I set when drawing to it seems to have been applied to the RGB channels instead.

Here's the shader I use to write sprites to the buffer:
uniform sampler2D texture;     // Sprite texture
uniform sampler2D rt_scene;    // Scene render texture, which is what the shader is drawing to. I pass this in here so I can mix without setting the alpha
uniform float r_depth;         // Fixed depth value for every sprite
void main()
{
    vec4 color = texture2D( texture, gl_TexCoord[0].xy );
    vec4 sceneBuffer = texture2D( rt_scene, gl_TexCoord[0].zw ); // gl_TexCoord[0].zw are screen space texture coords
   
    gl_FragColor.rgb = mix( sceneBuffer.rgb, color.rgb, color.a );
    gl_FragColor.a = mix( sceneBuffer.a, r_depth, color.a );
}

Then, when I draw the RenderTexture to the window, I also use a custom shader and do:
vec3 color = texture2D( texture, gl_TexCoord[0].xy ).rgb;
gl_FragColor.rgb = color;
gl_FragColor.a = 1.0;
 

And here's an image that shows the problem (the contents of the RenderTexture):


I've done some testing, and my guess is that the problem lies outside the GLSL code, but I'm not very familiar with things like OpenGL flags, so I come here hoping someone might have an idea of what to do. Fixing this would bring a pretty significant performance boost (and order-independent draw sorting!), so I would certainly appreciate it!

Thanks

3
Hi all!

I'm working on a 2D lighting system for sprites which runs on GLSL shaders. The lighting itself is currently very fast, as it's done almost entirely in the fragment shader, but I have run into a bottleneck: the way I create the input mask the shader works from.
Basically, I draw every sprite I want in the mask one extra time, setting its color to black before drawing it to the sf::RenderTexture. My solution for rendering to different channels is a bit hacky, as I couldn't figure out how to do it any other way (I started looking at a custom blend mode but didn't manage to create anything that worked): I have one RenderTexture for every channel(!), which are then combined into a single RenderTexture using BlendAdd. The obvious problem is that quite a lot of draw calls go into screen-sized RenderTextures, and they quickly stack up depending on how many sprites I put in spriteVector.

Now I realize this might not even be possible, but if anyone could shed some light on anything I could do to gain performance, that would be greatly appreciated. The thread title implies a certain solution, but I'm open to any suggestions, as I can probably make do with a lot of different approaches inside the shader (heck, I'd even take a 1-bit single-channel mask at this point...).
My gut feeling tells me I should go lower level and I started looking into stencil and depth buffers in OpenGL but I feel like I misunderstand how those tie into a 2D pipeline. Should have spent all that time writing shaders learning C++, I suppose...

Anyway, here's my code for drawing up the mask (the MaskedSprite class is just a wrapper for an sf::Sprite containing some extra values):
        // Clear mask channels
        rt_sceneMask_red.clear(sf::Color::White);
        rt_sceneMask_green.clear(sf::Color::White);
        rt_sceneMask_blue.clear(sf::Color::White);
        rt_sceneBuffer.clear(sf::Color(30, 30, 30));
       
        rt_sceneBuffer.draw(s_background);
        // Sprite masking
        for (std::vector<LightDemo::MaskedSprite*>::const_iterator it = spriteVector.begin(); it != spriteVector.end(); ++it) {
                // Get sprite pointer
                sf::Sprite* s_ptr = (*it)->getSprite();
                // Save color so we can restore it at end
                sf::Color colorTemp = s_ptr->getColor();
                // Set color to black for masking
                s_ptr->setColor(sf::Color::Black);
                // Draw to red mask (light rim 1st pass, shadows, SSAO)
                if ((*it)->drawToRed())
                        rt_sceneMask_red.draw(*s_ptr);
                // Draw to green mask (light rim 2nd pass, shadows, SSAO)
                if ((*it)->drawToGreen())
                        rt_sceneMask_green.draw(*s_ptr);
                // Draw to blue mask (light sprite blocking)
                if ((*it)->drawToBlue())
                        rt_sceneMask_blue.draw(*s_ptr);
                // Restore color before drawing to scene buffer
                s_ptr->setColor(colorTemp);
                // Draw to scene buffer
                if ((*it)->drawToScene())
                        rt_sceneBuffer.draw(*s_ptr);
        }
        rt_sceneMask_red.display();
        rt_sceneMask_green.display();
        rt_sceneMask_blue.display();
        rt_sceneBuffer.display();
       
        // Combine masks
        rt_sceneMask_RGB.clear(sf::Color::Black);
        rt_sceneMask_RGB.draw(s_sceneMask_red, sf::BlendAdd);
        rt_sceneMask_RGB.draw(s_sceneMask_green, sf::BlendAdd);
        rt_sceneMask_RGB.draw(s_sceneMask_blue, sf::BlendAdd);
        rt_sceneMask_RGB.display();
 

And here's what the lighting and mask looks like:



