
Show Posts



Messages - Sakarah

Pages: [1]
1
I am necro-bumping this thread because the issue is still not solved, and I have recently investigated it further since it prevents me from using sRGB in my project.

The problem is that once converted out of sRGB space, colors need more storage than before to be represented accurately.
To keep a smooth gradient in linear space that can still be converted back to the full sRGB scale, one would need more bits per pixel: as a simple approximation, we can think of sRGB-encoded data as pixel values between 0 and 1 raised to the power 2.2, and convince ourselves that if we do not increase the number of digits stored, the conversion function will not be injective.
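To make the non-injectivity concrete, here is a tiny standalone check (using the simplified pow-2.2 approximation above rather than the exact piecewise sRGB transfer function) that counts how many distinct 8-bit linear values the 256 possible sRGB codes map to:

#include <cmath>
#include <cstdio>
#include <set>

int main()
{
    // Simplified transfer function: linear = srgb^2.2 (the real sRGB curve
    // has a small linear toe near zero, but the conclusion is the same).
    std::set<int> distinctLinear;
    for (int code = 0; code < 256; ++code)
    {
        const double srgb   = code / 255.0;
        const double linear = std::pow(srgb, 2.2);
        distinctLinear.insert(static_cast<int>(std::round(linear * 255.0)));
    }

    // Prints well below 256: many dark sRGB codes collapse onto the same
    // 8-bit linear value, so the conversion cannot be undone without banding.
    std::printf("%zu distinct 8-bit linear values for 256 sRGB codes\n",
                distinctLinear.size());
}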

Currently, sf::Textures are represented either as 8-bit linear RGBA (when isSrgb is false, which is always the case for a rendered Texture) or as 8-bit sRGB plus 8-bit alpha (when isSrgb is true).
This means that after decoding an 8-bit sRGB color we try to store it in a linear 8-bit space, which obviously loses precision.

It seems to me that every solution for using sRGB decoding together with RenderTexture relies on letting advanced users choose the internal format of the sf::Texture behind a sf::RenderTexture. The real questions are how much control to actually give, and how to preserve the Texture interface with the weirder formats.

A texture format choice (like the current setSrgb call) needs to be made before loading pixel data or rendering.
When using multisampling, the color renderbuffer of the RenderTextureImpl must be tweaked as well, and some formats might not be available for this purpose (see the sketch right after the list below).
The number of internal texture formats that OpenGL offers is huge, some seem very old or exotic (like the tiny GL_R3_G3_B2), and platform support for them varies a lot. For our problem, the most relevant ones are:
  • GL_RGBA8: The current SFML default on most modern computers; its data is nicely represented by sf::Image. OpenGL treats it as linear RGB when performing operations.
  • GL_SRGB8_ALPHA8: The same as the previous one, except that OpenGL decodes the color from sRGB to linear space before performing an operation. This is currently used by SFML when isSrgb is true.
  • GL_RGBA16: Uses 16 bits for each color value. It seems that decoded sRGB values almost fit there.
  • GL_RGBA16F / GL_RGBA32F: Store the values as (half-)floats and should also map sRGB more accurately; supported only with the GL_ARB_texture_float extension (core since OpenGL 3.0).
  • GL_RED / GL_RG / GL_RGB: Variants that store fewer color channels; they might feel even weirder for sf::Image.
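For reference, here is roughly what selecting one of these internal formats looks like at the raw OpenGL level, both for the texture storage and for the multisampled color renderbuffer mentioned above. This is only a sketch: GL_RGBA16F is just the example format, and width, height and sampleCount are assumed to be defined elsewhere with an OpenGL context already current.

// Allocate texture storage with an explicit internal format (GL_RGBA16F as an
// example); the last three parameters describe the pixel transfer format,
// which may differ from the internal format on desktop OpenGL.
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// The multisampled color renderbuffer must use the same internal format as
// the texture it is resolved into, and not every format is renderable on
// every platform.
GLuint colorBuffer = 0;
glGenRenderbuffers(1, &colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, sampleCount,
                                 GL_RGBA16F, width, height);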

On desktop OpenGL, we can feed our internal Image pixels to any of these formats, and channels will be extended/discarded/recreated as needed. However, this means that we can no longer expect copyToImage and loadFromImage to be lossless operations. Moreover, it seems that OpenGL ES does not support uploading pixel data in a format different from the internal representation. This is not very satisfying, but it stems from not also supporting multiple formats in Image.

If we want to deal with the banding problem of this thread, I see four main options to be considered:
  • Option full sRGB: We accept that encoding intermediate RenderTextures in sRGB is not that bad, since its only cost is that textures cannot slowly accumulate precision, as they never exceed 8 bits per channel. Support just needs to be added in RenderTexture to set the sRGB mode on the underlying texture when requested (maybe through ContextSettings; see the sketch after this list). I think I can very quickly send a PR for this option.
  • Option RenderTexture format support: We could allow RenderTexture to select the format of its underlying texture, which might be RGBA16, RGBA16F or RGBA32F. This is a more powerful option, though it does create an inconsistency if the user calls copyToImage, because the returned image will be in linear space (while loaded images supposedly were in sRGB), and it might be troublesome with OpenGL ES or with old implementations that do not support float formats. The actual set of supported formats must also be decided.
  • Option Texture format support: We could let the user choose the format of any Texture, while Image stays identical. This is basically the same as the previous option, except that users can also change the format of loaded textures. This might feel weird, since all the pixel transfer operations would still use the 8-bit RGBA format, and it would definitely cause trouble with OpenGL ES.
  • Option Image format support: We could go even further and allow sf::Image to have different formats that can be given to Textures. While this is surely the most powerful option, it is also the most complex and pervasive one. I think that if such a change is desired, it should be postponed until SFML 3, so that the Image API can be fully reworked.
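To give an idea of how the full sRGB option could surface in user code, here is a purely hypothetical sketch, assuming the ContextSettings route and the existing sRgbCapable flag (today that flag does not affect the RenderTexture's own texture, which is exactly what the option would change):

#include <SFML/Graphics.hpp>

int main()
{
    // Hypothetical behaviour for the "full sRGB" option: requesting an
    // sRGB-capable context would also flag the RenderTexture's underlying
    // texture as sRGB, so intermediate rendering stays in sRGB end to end.
    sf::ContextSettings settings;
    settings.sRgbCapable = true;

    sf::RenderTexture intermediate;
    intermediate.create(800, 600, settings);

    intermediate.clear();
    // ... draw the scene into the intermediate target ...
    intermediate.display();

    // Drawing 'intermediate' onto an sRGB-capable window would then never
    // store linear values in 8 bits, which is what causes the extra banding.
}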

These options are not mutually exclusive, and in my quick and dirty test it seems that all four of them would actually solve the banding problem.
My current favorite option is full sRGB, for its simplicity and the internal coherency it can create. It means that all Images will be in sRGB color space, and it might enable us to add a simple compile-time option to make sRGB the default for a given project.

2
Feature requests / Max and Min blending mode
« on: March 07, 2021, 03:46:37 pm »
The enumeration sf::BlendMode::Equation contains only three constants, but as far as I know, OpenGL implementations that support Subtract also support Max and Min.
An old thread said that there was no use for Min and Max BlendMode equations. However, I strongly disagree with that statement, as my current project relies on the Max blending mode.

Indeed, I want to make a 2D infiltration game, where some agents (say, cameras) have a viewing score for each pixel in their range. When two or more of these cameras view the same zone, the viewing score of a pixel is the maximum of the individual cameras' scores.
I want the player to be able to visualize these viewing zones, so I plan to draw a viewing zone for each camera and rely on the blending mode to retain the maximum value.

As for Min: if I hypothetically wanted to instead draw the safeness of a given pixel, I would like each camera to locally decrease the safeness of its neighboring pixels and keep the minimal safeness value instead of the maximum.
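To make the use case concrete, assuming a hypothetical sf::BlendMode::Max equation existed, the camera pass would boil down to something like this (viewScores, cameras and viewingZone() are placeholders from my project, not SFML API):

// Hypothetical: sf::BlendMode::Max is the proposed equation, not current API.
// With a Max/Min equation the blend factors are ignored by OpenGL, so One/One
// is only there to satisfy the constructor.
const sf::BlendMode maxBlend(sf::BlendMode::One, sf::BlendMode::One,
                             sf::BlendMode::Max);

sf::RenderStates states;
states.blendMode = maxBlend;

// Each camera draws its viewing-score footprint into an off-screen target;
// where cameras overlap, the per-pixel maximum is kept instead of a sum.
viewScores.clear(sf::Color::Black);
for (const auto& camera : cameras)
    viewScores.draw(camera.viewingZone(), states);
viewScores.display();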

Obviously, I could fork SFML for my current project to fulfill my needs, but I really think that Max and Min blending modes can be useful to others, and their implementation in the library is trivial (around 15 LoC, roughly sketched below).
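For reference, the bulk of the change is extending the equation-to-GL mapping that SFML already performs for Add, Subtract and ReverseSubtract; here is a rough sketch of the idea (the exact function name and the GL extension wrappers used inside SFML may differ):

// Sketch: map the (extended) equation enum to the corresponding GL constant,
// mirroring the existing Add/Subtract/ReverseSubtract handling in RenderTarget.
GLenum equationToGlConstant(sf::BlendMode::Equation blendEquation)
{
    switch (blendEquation)
    {
        case sf::BlendMode::Add:             return GL_FUNC_ADD;
        case sf::BlendMode::Subtract:        return GL_FUNC_SUBTRACT;
        case sf::BlendMode::ReverseSubtract: return GL_FUNC_REVERSE_SUBTRACT;
        case sf::BlendMode::Min:             return GL_MIN; // proposed addition
        case sf::BlendMode::Max:             return GL_MAX; // proposed addition
    }
    return GL_FUNC_ADD; // fallback, should never be reached
}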

If I submit a pull request to add the Min and Max blending modes, would you accept it?
