One of the roadblocks on the way to providing newer, more "modern" and less "legacy" rendering implementations is the way shaders are used.
The problem is that Khronos made a clean cut when jumping from the legacy to the modern pipeline with regard to shader usage. There is no backwards-compatible way to cover both implementations.
For those who don't understand what I mean, let me demonstrate.
This is what legacy GLSL (the kind SFML currently uses) would look like:
uniform float wave_phase;
uniform vec2 wave_amplitude;

void main()
{
    vec4 vertex = gl_Vertex;
    vertex.x += cos(gl_Vertex.y * 0.02 + wave_phase * 3.8) * wave_amplitude.x
              + sin(gl_Vertex.y * 0.02 + wave_phase * 6.3) * wave_amplitude.x * 0.3;
    vertex.y += sin(gl_Vertex.x * 0.02 + wave_phase * 2.4) * wave_amplitude.y
              + cos(gl_Vertex.x * 0.02 + wave_phase * 5.2) * wave_amplitude.y * 0.3;

    gl_Position = gl_ModelViewProjectionMatrix * vertex;
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}
This is what the non-legacy variant would look like:
#version 130

uniform mat4 sf_ModelViewMatrix;
uniform mat4 sf_ProjectionMatrix;
uniform mat4 sf_TextureMatrix;

in vec4 sf_Vertex;
in vec4 sf_Color;
in vec4 sf_MultiTexCoord;

out vec4 sf_FrontColor;
out vec2 sf_TexCoord;

uniform float wave_phase;
uniform vec2 wave_amplitude;

void main()
{
    vec4 vertex = sf_Vertex;
    vertex.x += cos(sf_Vertex.y * 0.02 + wave_phase * 3.8) * wave_amplitude.x
              + sin(sf_Vertex.y * 0.02 + wave_phase * 6.3) * wave_amplitude.x * 0.3;
    vertex.y += sin(sf_Vertex.x * 0.02 + wave_phase * 2.4) * wave_amplitude.y
              + cos(sf_Vertex.x * 0.02 + wave_phase * 5.2) * wave_amplitude.y * 0.3;

    gl_Position = sf_ProjectionMatrix * sf_ModelViewMatrix * vertex;
    sf_TexCoord = (sf_TextureMatrix * sf_MultiTexCoord).xy;
    sf_FrontColor = sf_Color;
}
Almost all of the built-in variables have been removed. The only remaining one relevant to us is gl_Position.
Because the built-ins are gone, SFML would have to explicitly transfer the uniform data and specify the attribute bindings using the in and out attribute names, as seen in the second snippet. For demonstration purposes, I simply replaced the gl_ prefix with sf_ to make the changes clearer and to make it obvious that these are no longer GL built-ins.
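One way SFML could supply those sf_ declarations is to prepend them to the user's source before compiling, so users only write the body of their shader. A minimal sketch, assuming a hypothetical helper (the function name and exact preamble are not actual SFML API):

```cpp
#include <string>

// Hypothetical sketch: prepend the sf_* declarations to user-provided GLSL
// so the user never has to write them. Mirrors the snippet above.
std::string addModernPreamble(const std::string& userSource)
{
    const std::string preamble =
        "#version 130\n"
        "uniform mat4 sf_ModelViewMatrix;\n"
        "uniform mat4 sf_ProjectionMatrix;\n"
        "uniform mat4 sf_TextureMatrix;\n"
        "in vec4 sf_Vertex;\n"
        "in vec4 sf_Color;\n"
        "in vec4 sf_MultiTexCoord;\n"
        "out vec4 sf_FrontColor;\n"
        "out vec2 sf_TexCoord;\n";
    return preamble + userSource;
}
```

The downside is that this bakes the naming convention into SFML, which leads straight into the questions below.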
The question now is:
Should SFML just provide a fixed table of legacy replacements, so that users can write GLSL the way they used to even when using the non-legacy pipeline? This would force users to follow the given convention; if they don't, errors will occur and drawing will simply not produce the expected output.
Or should SFML allow users to specify the names of all the required uniforms and attributes in their GLSL, via some global/static function called once at the start of the application? If users don't follow what they specified, drawing will likewise not produce the expected output.
Perhaps a mixed solution combining both options would be best: SFML would use the pre-specified table of replacements as default values and allow the user to override entries as they wish.
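The mixed approach could look something like this. A pure-C++ sketch with hypothetical names (this is not actual SFML API): a static table of default binding names that the user may override once at startup.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of the mixed approach: defaults mirror the sf_*
// convention from the snippet above, but each entry can be overridden.
class ShaderBindings
{
public:
    static const std::string& name(const std::string& builtin)
    {
        return table()[builtin];
    }

    static void setName(const std::string& builtin, const std::string& custom)
    {
        table()[builtin] = custom;
    }

private:
    static std::map<std::string, std::string>& table()
    {
        static std::map<std::string, std::string> names = {
            {"vertex",   "sf_Vertex"},
            {"color",    "sf_Color"},
            {"texCoord", "sf_MultiTexCoord"}
        };
        return names;
    }
};
```

A user who prefers their own convention would call something like `ShaderBindings::setName("vertex", "a_position");` once at application startup; everyone else gets the defaults for free.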
The next question would be: how does SFML guarantee that the GLSL the user provides will work no matter what the target platform supports? The problem is that the user might not know beforehand whether the target platform supports non-legacy shaders. To handle both scenarios, the user would have to provide two sets of shader source: one for the legacy implementation and one for the non-legacy implementation. They could always opt out of the non-legacy implementation and stick to legacy for maximum compatibility (and to avoid writing GLSL twice), but that prevents them from using the non-legacy pipeline, which might be the better choice where available. On the mobile platforms this isn't an issue, since legacy shaders don't exist there.
Once these questions are answered, SFML might be able to move on and provide a few newer rendering backends and finally offer support for shaders on mobile platforms.
So, what would you, the community, like to see/use?
Setting the bindings via the Shader class is all nice and good; however, the actual binding takes place in the RenderTarget. Therefore, either the RenderTarget has to check whether custom bindings are required by inspecting the Shader objects on every draw, or the bindings become part of the RenderTarget instead. The latter variant would make the inner workings of the RenderTarget transparent to users who don't care one bit about shaders in their application.
To clarify, is the concern here having a public function in the RenderTarget class for setting these shader param bindings?
If so, one way around that is to make those functions private (so not exposed in the public API), have the RenderTarget friend the Shader class, and add a public API in the Shader class to set the bindings, which then calls the private function on the RenderTarget.
... A little convoluted, but it at least keeps the public API for the RenderTarget clean of any shader-related stuff.
(edit) For this to work, the binding vars would need to be static, and shared across all RenderTargets, but that's likely what you'd want anyway.
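The pattern described in the last few posts could be sketched like this (class and function names are hypothetical, and the real classes obviously contain far more):

```cpp
#include <string>

// Hypothetical sketch: Shader exposes the public setter, while RenderTarget
// keeps the binding storage private and static, shared across all targets.
class Shader;

class RenderTarget
{
    friend class Shader;

public:
    // Read access for the draw path.
    static const std::string& attributeBinding()
    {
        return vertexAttributeName();
    }

private:
    // Only reachable through the Shader friend class.
    static void setAttributeBinding(const std::string& name)
    {
        vertexAttributeName() = name;
    }

    static std::string& vertexAttributeName()
    {
        static std::string name = "sf_Vertex"; // shared default
        return name;
    }
};

class Shader
{
public:
    // Public API: forwards to the private RenderTarget function.
    static void bindVertexAttribute(const std::string& name)
    {
        RenderTarget::setAttributeBinding(name);
    }
};
```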
How does the user determine/control which rendering backend SFML chooses to use? If multiple are available, it would be nice if the newest were to be chosen, but there might be times when the user wants to suppress selection of newer backends for whatever reason.
I would think an sf::ContextSettings variable would do the trick.
Perhaps something like
// in sf::ContextSettings:
unsigned int maxMajorVersion
unsigned int maxMinorVersion
or alternatively
bool useHighestAvailableGLVersion
To clarify, is the concern here having a public function in the RenderTarget class for setting these shader param bindings?
Neither nor really. ;D
Conceptually, yes... Shader "stuff" should go into the Shader class. However, this is less about setting attributes of the shaders as much as it is about simply defining a convention that is to be used between SFML and the developer. I also wouldn't go as far as to store the attribute names in non-static storage. As I said, it's about defining a convention, and typically, there should really only be a single convention in use at a time. This speaks strongly in favour of having static functions/variables to manage the bindings.
As for the location of these static methods/variables, I am really indifferent (this is where the community comes in ;)). Either way, the RenderTarget will have to access this data and bind the attributes accordingly. From a performance standpoint there is absolutely no difference between having them in RenderTarget or Shader. Semantically, and from an OpenGL perspective, the binding (as its name implies) is the interface between the data source (in our case the RenderTarget) and the shader program (which needs to know which attributes to assign the input data streams to). As such, both sides would be equally valid locations for binding data. One thing we must not forget is that VAOs (which are required in modern OpenGL) will need to be part of the RenderTargets, since they are not shareable between contexts. Having the static binding data in the RenderTarget would ease maintenance of the VAOs without having to introduce coupling between the RenderTarget and Shader classes.
I would think an sf::ContextSettings variable would do the trick.
Perhaps something like
// in sf::ContextSettings:
unsigned int maxMajorVersion
unsigned int maxMinorVersion
or alternatively
bool useHighestAvailableGLVersion
The OpenGL specification allows implementations to give us any version it wants that satisfies the requirements imposed by the version we request. SFML can request version 2.1 and will get a 4.5 compatibility context. On the other hand, if SFML requests 2.1 and the implementation doesn't support compatibility contexts (such as OS X and Mesa3D), it will give us a 3.0 context since 3.1+ would be core only.
The version numbers the user specifies in ContextSettings are merely a minimum required version (which is why the built-in check only warns when the returned version might be incompatible, i.e. lower than requested, or core when compatibility was requested). There is no notion of a "maximum version".
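That "minimum version" rule can be expressed as a small predicate (a hypothetical helper, not SFML code):

```cpp
// Hypothetical sketch of the minimum-version semantics: the context the
// implementation returns is acceptable as long as it is not lower than
// what was requested. Profile (core vs. compatibility) is checked separately.
bool satisfiesMinimum(unsigned requestedMajor, unsigned requestedMinor,
                      unsigned actualMajor,    unsigned actualMinor)
{
    if (actualMajor != requestedMajor)
        return actualMajor > requestedMajor;
    return actualMinor >= requestedMinor;
}
```

So requesting 2.1 and receiving a 4.5 compatibility context is perfectly fine; receiving 2.0 would trigger the warning.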
What I was referring to, however, was not the OpenGL context version. SFML will always request the highest context version that it can if the user doesn't care (as I mentioned above, the implementation already does this anyway). Based on the version of the created context, SFML will compile a list of rendering implementations supported on the user's system. From that set, SFML will have to decide which one to use. It is this decision that the user should be able to influence somehow, since it has implications for what kind of GLSL is expected/supported. If the user doesn't influence the decision in any way, SFML would just go for the newest/optimal implementation that can deliver the best performance on the given hardware.
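The selection step might be sketched like this; all names, and the assumption that the "modern" implementation requires GL 3.0+, are illustrative only:

```cpp
#include <vector>

// Hypothetical sketch of backend selection: build the list of supported
// implementations from the context version, ordered oldest to newest,
// then pick the newest unless the user pinned the legacy path.
enum class Backend { Legacy, Modern };

std::vector<Backend> supportedBackends(unsigned major, unsigned /*minor*/)
{
    std::vector<Backend> list = { Backend::Legacy };
    if (major >= 3)                  // assumption: "modern" needs GL 3.0+
        list.push_back(Backend::Modern);
    return list;
}

Backend chooseBackend(unsigned major, unsigned minor, bool preferLegacy)
{
    const std::vector<Backend> list = supportedBackends(major, minor);
    if (preferLegacy)
        return Backend::Legacy;
    return list.back();              // newest supported implementation
}
```

Whatever form the user-facing knob takes (a ContextSettings field, a static function, ...), it would feed the `preferLegacy`-style input to this decision.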