
Author Topic: GL_MAX_TEXTURE_IMAGE_UNITS?


Jabberwocky

GL_MAX_TEXTURE_IMAGE_UNITS?
« on: April 11, 2015, 08:57:53 pm »
Hi,

From SFML 2.2, Shader.cpp:

    GLint checkMaxTextureUnits()
    {
        GLint maxUnits = 0;

        glCheck(glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &maxUnits));

        return maxUnits;
    }
 

This function is invoked when determining how many texture params I can bind to a shader.  Here's my question, though.  Shouldn't this function be checking GL_MAX_TEXTURE_IMAGE_UNITS instead?  I believe that number is usually higher than GL_MAX_TEXTURE_COORDS_ARB, so this function is artificially limiting the number of textures I can send to my shader. 
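In other words, I would have expected something more like this (just my own sketch of the same function with a different query, not the actual SFML code):

    GLint checkMaxTextureUnits()
    {
        GLint maxUnits = 0;

        // Query the fragment-stage sampler limit instead of the
        // fixed-function texture coordinate limit.
        glCheck(glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits));

        return maxUnits;
    }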

binary1248


Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #2 on: April 11, 2015, 10:01:25 pm »
Ah cool, thanks binary1248.  Nice to know you guys are one step ahead of me.  :)

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #3 on: April 11, 2015, 10:15:55 pm »
Actually, the updated github code may still be incorrect.

According to opengl.org, that value (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) gives the maximum number of texture image units that can be used across the vertex, fragment, and geometry shaders combined, i.e. the combined limit.  The limit for any individual stage is lower.

From the above link:
Quote
Max Texture Units

Never call

 glGetIntegerv(GL_MAX_TEXTURE_UNITS, &MaxTextureUnits);

because this is for the fixed pipeline which is deprecated now. It would return a low value such as 4.
For GL 2.0 and onwards, use the following

 glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxTextureImageUnits);

The above would return a value such as 16 or 32 or above. That is the number of image samplers that your GPU supports in the fragment shader.

The following is for the vertex shader (available since GL 2.0). This might return 0 for certain GPUs.

 glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &MaxVertexTextureImageUnits);

The following is for the geometry shader (available since GL 3.2)

 glGetIntegerv(GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS, &MaxGSGeometryTextureImageUnits);

The following is VS + GS + FS (available since GL 2.0)

 glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxCombinedTextureImageUnits);

and the following is the number of texture coordinates available which usually is 8

 glGetIntegerv(GL_MAX_TEXTURE_COORDS, &MaxTextureCoords);

It looks to me like the shader needs to know what kind it is (vertex, fragment, geometry), and then use the appropriate value:

vertex:  GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS
fragment:  GL_MAX_TEXTURE_IMAGE_UNITS
geometry:  GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS  (GL 3.2+, if/when SFML supports geometry shaders).

Alternatively, if we're assuming that this is just for fragment shaders, then GL_MAX_TEXTURE_IMAGE_UNITS appears to be the correct query.
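To make that concrete, here's a rough sketch (my own, not SFML code) of what I mean by picking the right query per stage:

    // Sketch: query the texture image unit limit for a given shader stage.
    GLint maxTextureUnitsForStage(GLenum stage)
    {
        GLint maxUnits = 0;
        switch (stage)
        {
            case GL_VERTEX_SHADER:
                glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxUnits);
                break;
            case GL_FRAGMENT_SHADER:
                glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);
                break;
            case GL_GEOMETRY_SHADER: // GL 3.2+
                glGetIntegerv(GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS, &maxUnits);
                break;
        }
        return maxUnits;
    }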
« Last Edit: April 11, 2015, 10:22:03 pm by Jabberwocky »

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #4 on: April 11, 2015, 11:06:13 pm »
We don't dictate how you use texture image units within your shader program. You are free to use as many of them as you wish in each of your shader stages as long as they don't exceed the limits of each stage like you quoted. The problem is that SFML only queries a single limit since it has no knowledge of how you distribute them between the stages, which means either we check against the minimum of all stages or the combined maximum.

Ideally there would be a way to check each stage individually, but that would require user interaction which would make usage of sf::Shader a bit more involved than it currently is. I think for most intents and purposes, checking against the combined limit should be enough. If you really have to know the limits of each shader stage, you might be using too many textures within a single stage anyway and are probably better off using something else like a buffer object instead.

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #5 on: April 11, 2015, 11:45:13 pm »
Quote
The problem is that SFML only queries a single limit since it has no knowledge of how you distribute them between the stages

I understand.

Quote
I think for most intents and purposes, checking against the combined limit should be enough.

I respectfully disagree.  I have trouble seeing how the current check is of any use at all.

For example, on my machine:
GL_MAX_TEXTURE_IMAGE_UNITS_ARB = 16
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS_ARB = 96

It seems highly unlikely I would ever be using 96 texture units in any individual shader.  In the unlikely event I was, that check would still be useless as I would very likely have exceeded one of the other limits for vertex, fragment, or geometry shaders.

What does seem a more common use case for a primarily 2D media library is sending multiple textures to a fragment program.  Maybe I'm implementing 2D lighting with diffuse, normal, specular and occlusion textures, for instance.  Maybe I'm implementing a shadowing solution which requires a texture per visible light.  Maybe I'm multitexturing a single sprite or vertex array.

Unfortunately, I simply can't trust the current SFML code to tell me how many texture units is safe to use.  It might be 4, 8, 16, 32... I have no idea. 

I understand the limitations behind the current check.  However, that check seems to fail in the most common use case, and in every realistic use case I can think of.  If this code has any effect at all, it is likely only to mislead programmers into believing they have made a relevant validation check, when they have not.

It's no problem for me to just change the SFML code for my own purpose, which is likely what I'll end up doing.  But while I was fiddling around with this code, I thought I'd bring up this shortcoming of the SFML code.
« Last Edit: April 12, 2015, 12:45:57 am by Jabberwocky »

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #6 on: April 12, 2015, 02:02:22 am »
I think you misunderstand what the point of that code segment inside Shader.cpp is. It's not about telling you how many texture image units are available. It's about making sure that whatever you pass to sf::Shader::setParameter can actually be bound properly when the shader is used. The binding is performed on the entire shader program and not on a per-stage basis, so we aren't interested in the per-stage limit. There is also no way for you to ask SFML how many units are available per stage. That information is used exclusively inside the SFML shader code.
Quote
Unfortunately, I simply can't trust the current SFML code to tell me how many texture units is safe to use.
This is correct, because SFML doesn't provide you with this information, so there is nothing to trust anyway. ;)

I think you should do a bit more reading about how samplers and texture image units interact with each other. We need to make use of texture image units as a place to bind our textures to, so that a sampler can source from them. The whole concept of texture image units is independent of any per-stage limits. They can be used even in the absence of shaders, although it would make little sense. The total number of units available, as already stated, is queried through GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. You can address them using indices GL_TEXTURE0 + 0 through to GL_TEXTURE0 + GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1.

The per-stage limits only come into play when actually binding samplers to texture image units. You can have 1024 samplers in a stage if you want; as long as you only bind 8 of them to texture image units, OpenGL will still be happy because that remains within the limits of that particular stage. SFML doesn't enforce this in any way, it lets you do whatever you want, and if the sampler binding fails, it will print an OpenGL error to stderr.
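As a rough illustration of those two bindings in plain OpenGL (not SFML internals; texture and program are assumed to already exist):

    // texture object -> texture image unit -> sampler uniform
    GLint maxCombined = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombined);

    int unit = 3;                          // any index < maxCombined
    glActiveTexture(GL_TEXTURE0 + unit);   // select the texture image unit
    glBindTexture(GL_TEXTURE_2D, texture); // first binding: texture -> unit

    glUseProgram(program);
    GLint location = glGetUniformLocation(program, "myTexture");
    glUniform1i(location, unit);           // second binding: unit -> sampler

    // Only the number of such active sampler bindings within a single
    // stage is constrained by the per-stage limits.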

Also, I'm not sure what you would do even if you did know the limits of each individual stage. Like I said, you will still only have GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS units available, and have to map them to samplers in your stages, but currently, SFML takes care of this mapping for the user. It only makes sense to return the per-stage limits if we pushed mapping management over to the user, and that would make sf::Shader less useful than it currently is. If you are thinking that you can address texture image units on a stage-by-stage basis, then you should actually try to do this. You will realize that you will have to partition the complete range down into ranges for each stage and make sure they are always correctly mapped. It's not impossible to do this, but the S in SFML stands for simple, and we have to draw the line at some point.

I've always recommended that people just use raw OpenGL if the SFML constructs aren't cutting it for them, and your scenario is a really obvious case of such a situation. If you really need to know this information, then just use a raw OpenGL shader. ;)

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #7 on: April 12, 2015, 03:15:03 am »
I appreciate your detailed responses to this, binary.

Quote
I think you should do a bit more reading about how samplers and texture image units interact with each other.

I don't think there's any fundamental OpenGL I've misunderstood here.  I'm not sure what samplers have to do with my post at all.  I just want to know how many textures I can use in my fragment programs.  But please feel free to quote and correct any error I've made, I'm always happy to learn.

But you're right, I understand now that the GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS check isn't meant to do what I thought it was.  Maybe that was compounded by the 2.2 release code erroneously checking for GL_MAX_TEXTURE_COORDS_ARB.

Other graphics middleware I've worked with do provide an interface to query the max number of texture units for a fragment program.  So I guess I was looking at the SFML code through that lens.

Quote
Also, I'm not sure what you would do even if you did know the limits of each individual stage.

I would check to make sure my fragment programs could run on a particular computer, and if not, either:
  • exit with a "hardware not supported" error on startup,
  • or better yet, have a simpler fallback shader

Either is a better solution than running broken OpenGL.
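Something along these lines is all I have in mind (a sketch with my own hypothetical shader files, assuming an active context and <SFML/Graphics.hpp>):

    GLint maxFragmentTextureUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxFragmentTextureUnits);

    sf::Shader shader;
    if (maxFragmentTextureUnits >= 6) // full effect needs 6 textures (hypothetical)
        shader.loadFromFile("lighting_full.frag", sf::Shader::Fragment);
    else
        shader.loadFromFile("lighting_simple.frag", sf::Shader::Fragment);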

But no worries, it's a simple check and I can implement it myself.

Thanks for your time.
« Last Edit: April 12, 2015, 03:57:49 am by Jabberwocky »

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #8 on: April 12, 2015, 05:47:45 am »
Quote
I'm not sure what samplers have to do with my post at all.  I just want to know how many textures I can use in my fragment programs.
Samplers source their data from texture image units. There are 2 bindings on the way from the texture object itself to the sampler inside a shader stage. The first one is the binding of textures to texture image units: you can't bind more textures at a time in total than the combined limit. The second binding is from texture image unit to shader sampler. OpenGL probably does the mapping automatically for us, but there is a mapping somewhere. All that matters for the second one is that the number of active mappings doesn't exceed the maximum for the corresponding stage.

Needing to know how many textures you can use in a specific shader stage is the same as asking how many active sampler bindings can exist for that shader stage.

Quote
Other graphics middleware I've worked with do provide an interface to query the max number of texture units for a fragment program.  So I guess I was looking at the SFML code through that lens.
They also expose each shader individually and let you do things with them. The only point where SFML asks you about a stage is when you have to provide the source for the specific stage. Everything that happens after that happens at a program level. It doesn't matter whether your sf::Shader only has a vertex shader stage or only a fragment shader stage, SFML will treat it the same regardless.

Quote
exit with a "hardware not supported" error on startup
This doesn't seem very user-friendly and should really only happen as a very last resort to avoid undefined behaviour.

Quote
or better yet, have a simpler fallback shader
The question then is: If this is possible, why not work bottom-up and make sure that the shader stays simple enough to function on all hardware that is potentially supported? The limits these days are getting so large that I have to ask myself: Are people actually going to make use of this in a real-world scenario?

See, the thing about textures is that well... they're textures. The only difference between them and generic buffer objects are the samplers and texture operations that can be performed on them. In fact, OpenGL really doesn't care at all about how you end up using the data, which is why texture buffer objects exist as well. Just because you can make use of 32 samplers in a certain stage, does that mean that there really is no better alternative? I'm pretty sure there always is. The thing with textures is that they are just too easy to use and get away with. Rather than thinking of more clever ways of getting data into the shader, people just push the number of textures they end up using and obviously the GPU vendors follow suit and tailor the hardware/API to the usage patterns of the end users, the typical positive-feedback loop.

Just think about it, even in those big AAA games that are known to push GPUs to their limits, when you look at an object somewhere (normally a shader is used per material, so it is probably rebound whenever new object types are rendered) are you able to recognize all those... what... 8, 16, 32 textures that were potentially used to render it? I think textures are simply abused to do "other things", and given that OpenGL provides so many other (potentially better) ways to get the job done, there is no excuse to have to do things like this if you ask me.

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #9 on: April 12, 2015, 07:05:25 am »
Quote
Samplers source their data from texture image units.

... (cut) ...

Needing to know how many textures you can use in a specific shader stage is the same as asking how many active sampler bindings can exist for that shader stage.

Right.
i.e. "the number of textures a fragment program can use" (if the stage we're talking about is the fp)
i.e. GL_MAX_TEXTURE_IMAGE_UNITS

I do appreciate you going to the trouble to explain this.  But I'm not sure how or where I conveyed that I didn't understand it.  ;)  Anyway, let's move on.  No point debating further something we both agree on.

Quote
They also expose each shader individually and let you do things with them. The only point where SFML asks you about a stage is when you have to provide the source for the specific stage. Everything that happens after that happens at a program level. It doesn't matter whether your sf::Shader only has a vertex shader stage or only a fragment shader stage, SFML will treat it the same regardless.

Yep!
However, none of this invalidates the need to make sure my shaders will run on a particular piece of hardware, rather than simply malfunction.

Quote
This doesn't seem very user-friendly and should really only happen as a very last resort to avoid undefined behaviour.

Agreed.

Quote
The question then is: If this is possible, why not work bottom-up and make sure that the shader stays simple enough to function on all hardware that is potentially supported? The limits these days are getting so large that I have to ask myself: Are people actually going to make use of this in a real-world scenario?

When you sell a game commercially somewhere like Steam, it's surprising how old the hardware you'll encounter can be, say in Eastern European countries.  For some SFML users, maybe you have to consider weaker machinery like tablets, too.  Not that I have any idea what's actually inside those, but I assume it sucks.

As we both agree, it's nice to have a graceful exit or downgraded graphics in the case of weak/unsupported hardware.  Just assuming hardware is good enough is almost always a bad idea.

Also, say in the case of graphics options, you may want to offer a range of shaders, high-res textures, etc.  However, you don't want to present those options on a rig that won't run them.  Again, you need to know what hardware capabilities you're working with.  Which all goes back to my original question/concern - "how many texture units have I got to work with in a fragment program?"

You're not going to convince me I don't need to know that.  ;)

In fact, even if you look at the SFML source for Shader.hpp, it comes with this nifty little comment:

    ////////////////////////////////////////////////////////////
    /// \brief Tell whether or not the system supports shaders
    ///
    /// This function should always be called before using
    /// the shader features. If it returns false, then
    /// any attempt to use sf::Shader will fail.
    ///
    /// Note: The first call to this function, whether by your
    /// code or SFML will result in a context switch.
    ///
    /// \return True if shaders are supported, false otherwise
    ///
    ////////////////////////////////////////////////////////////
    static bool isAvailable();
 

It makes no sense to check if shaders are supported, yet not also check that the capabilities are powerful enough to run your shaders.  You with me on this?  If not, we'll have to agree to disagree.

Quote
See, the thing about textures is that well... they're textures. The only difference between them and generic buffer objects are the samplers and texture operations that can be performed on them. In fact, OpenGL really doesn't care at all about how you end up using the data, which is why texture buffer objects exist as well. Just because you can make use of 32 samplers in a certain stage, does that mean that there really is no better alternative? I'm pretty sure there always is. The thing with textures is that they are just too easy to use and get away with. Rather than thinking of more clever ways of getting data into the shader, people just push the number of textures they end up using and obviously the GPU vendors follow suit and tailor the hardware/API to the usage patterns of the end users, the typical positive-feedback loop.

Just think about it, even in those big AAA games that are known to push GPUs to their limits, when you look at an object somewhere (normally a shader is used per material, so it is probably rebound whenever new object types are rendered) are you able to recognize all those... what... 8, 16, 32 textures that were potentially used to render it? I think textures are simply abused to do "other things", and given that OpenGL provides so many other (potentially better) ways to get the job done, there is no excuse to have to do things like this if you ask me.

That all makes sense to me. 

I'm not sure if you were suggesting I take the buffer object approach.  I'd happily do it if it was feasible.  Although the reason that I (and I expect most other users) come to use SFML is because:
1.  We don't have the expertise or time to write something better or more complex ourselves.
2.  Our (mostly 2D) games/apps don't need cutting edge optimization to run at a reasonable framerate.

I very much notice and appreciate the optimizations you and the SFML team make.  You know, stuff that all works "under the hood" for us plebs.  ;)  But coding up a solution like this for my game is a little above my pay grade, and probably not an appropriate time sink for an indie dev making a 2D game.  Maybe you weren't necessarily suggesting I do that.  Rather, you just needed to "rant" (in a good way) about the inefficiency of the texture vs buffer object approach.  Although SFML does use this inefficient approach as well.

Regardless, interesting reading!
« Last Edit: April 12, 2015, 07:22:23 am by Jabberwocky »

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #10 on: April 12, 2015, 11:15:20 am »
I agree that there are legitimate cases where the developer has to know the per-stage limits, such as having to deal with pretty crappy hardware without having to be too conservative. The question now really is: Is this SFML's responsibility?

What you are asking is kind of pushing the boundary of what can be considered within SFML's scope. You have to understand that a long time ago, sf::Shader wasn't even called sf::Shader yet, it was called sf::PostFX, which kind of hints at what it was originally designed to do. Simply put: it was meant as a way to "do the stuff that would be impossible to do without it". The PostFX concept is even more generic than the current sf::Shader one. When designing an API, one must consider its guaranteed behaviour as opposed to how the underlying implementation is to be exposed to the user. From that point of view, sf::PostFX could have been implemented through OpenGL shaders, DirectX shaders, or something completely different. That is the primary reason why details such as the per-stage texture image unit limit never crept out; they were never meant to.

I have to admit, over the years, SFML has exposed more and more of its OpenGL implementation through the API, to the point where I am even pushing for making interoperability between SFML code and raw OpenGL code as simple as possible. The capabilities of sfml-graphics objects such as sf::Texture and sf::Shader that do nothing else but encapsulate OpenGL objects should be easily extensible by the user if they wish.

The first step in that direction was already made in this commit. Basically, using any revision after that commit, the user can extend SFML's capabilities by mutating the OpenGL object directly themselves, if they feel it is necessary. This is even noted in the documentation of the newly added methods. Using the latest master revision, it should be possible to grab arbitrary OpenGL function pointers through SFML and eventually use them to perform whatever you want to directly on the object handles that are exposed as well.

Like I said above, what you need is very specific, too specific to be included in the SFML API. But since I have lobbied hard for making OpenGL interoperation as painless as possible, you are able to get what you need done without any/too many hacks. Interoperation will probably become even easier in the future. ;)

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #11 on: April 12, 2015, 06:47:59 pm »
The scope question is legit for sure.

In general, I really like SFML's API and level of abstraction from OpenGL.  I also understand it's a bit of a balancing act to keep things simple, yet also reasonably powerful, flexible, and fast.

I would argue, although partially for self-serving reasons ;), that hardware capability checking should be part of SFML's scope.  Even a "simple" program should verify its ability to function properly, unless it's purely a hobby or learning pursuit.  It wouldn't necessarily have to be a complex implementation.  It could simply be a struct, like ContextSettings, that gets filled in with stuff like max texture size, GL_MAX_TEXTURE_UNITS, a geometry shader support check (should that be added to SFML), a mipmap support check, shader profiles supported, etc.  Then, push the responsibility to the user to do with this data what they wish.
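Roughly something like this, purely hypothetical, just to show the shape I have in mind:

    // Hypothetical sketch only - not an existing or proposed SFML type.
    struct HardwareCapabilities
    {
        unsigned int maxTextureSize;          // cf. sf::Texture::getMaximumSize()
        unsigned int maxFragmentTextureUnits; // GL_MAX_TEXTURE_IMAGE_UNITS
        unsigned int maxCombinedTextureUnits; // GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS
        bool         geometryShaderSupport;
        bool         mipmapSupport;
        // shader profiles supported, etc.
    };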

This would be "simple" in that the casual user could totally ignore this HardwareCapabilities struct.  On the flip-side, for any more serious software release, the programmer could query it and make it's own decisions how to handle this info, and validate or alter the program's functionality (e.g. simpler shaders).  And this could be done without touching any opengl, remaining true to the spirit of SFML's "simple" API. 

By not having it, it's no more simple for a casual user, but much less simple for any programmer who is writing a serious software release.  You're basically "forcing" all those users to write this low level code themselves.  That's the opposite of simple.  Because it's like SFML does 95% of the base functionality that every fully robust app will need, but leaves that 5% where the user must delve into the very low level opengl code that SFML otherwise successfully abstracts away. 

To me, the "S" in SFML is largely about allowing me to avoid low level opengl.  But SFML also allows for layers of depth.  A casual user can create a few sprites and move them around.  A more advanced user can employ VertexArrays, Shaders, RenderTextures, etc.  The casual user can ignore those.  You guys make the use of these more advanced concepts (shaders, render textures, etc) MUCH more pleasant to work with ("simple") than writing the raw opengl.  So again, I think you have been largely successfully here.  Except, perhaps, a few little edge cases like this.

Regardless, I understand there will be different opinions on this kind of stuff.  And you guys have done a remarkable job with SFML.  So I ain't complaining, just offering a point of view on a few of these unaddressed, but arguably essential graphics features.

Once again, thanks for the discussion binary.

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #12 on: April 13, 2015, 05:56:48 am »
Even a "simple" program should verify it's ability to function properly, unless it's purely a hobby or learning pursuit.
That's what the OpenGL specification is for. Conforming implementations guarantee a minimum working set of capabilities on which software can rely on. I don't know what you are thinking, but none of those values you are querying is allowed to return 0, at least since 3.0 according to the official reference page for glGet(). In fact, the minimum allowed values are fairly generous for all but the most demanding applications if you ask me. Worrying that a program will fail to function properly because there might not be enough resources to use is not an OpenGL thing. It might be the case for other APIs like DirectX, but when you know that industrial and other more "serious" applications rely on your specification, you leave less room for vendors to interpret your requirements.

Quote
It could simply be a struct, like ContextSettings, that gets filled in with stuff
Having a central location for such a broad range of unrelated attributes isn't good style if you ask me. Any time a feature is added/changed that relies on such values, changes will have to be made to multiple modules instead of solely at the site at which the value is relevant. This increases coupling, and makes proper maintenance of the code slightly more tedious with every iteration. The sfml-window module's sole purpose is to provide a working GL environment on top of which other modules or your own code can build. It shouldn't try to take over responsibility when it doesn't have to. Sure, you and other coders might want it to, because we end up saving you work, but keeping the implementation clean and easily maintainable has higher priority, especially given SFML's limited development resources.

Quote
max texture size
sf::Texture::getMaximumSize()

Quote
GL_MAX_TEXTURE_UNITS
Too specific; you can do this on your own with current master.

Quote
geometry shader support check (should that be added to SFML)
#428

Quote
a mipmap support check
#123

Quote
shader profiles supported
Can be derived from the context version.

This would be "simple" in that the casual user could totally ignore this HardwareCapabilities struct.  On the flip-side, for any more serious software release, the programmer could query it and make it's own decisions how to handle this info, and validate or alter the program's functionality (e.g. simpler shaders).  And this could be done without touching any opengl, remaining true to the spirit of SFML's "simple" API.
There are 2 kinds of "Simple", and over time I feel SFML is diverging from one to the other (which is not necessarily a bad thing).

The first kind, probably the kind many beginners have in mind when they start using SFML is the "I don't need to know how the stuff works" simple. They start using SFML, try to realize whatever they had envisioned using the tools provided to them, and any time lower level knowledge becomes unavoidable, they end up posting on the forum. SFML hides the lower level implementation from them, shielding the nastiness of OpenGL from people who are already struggling just learning C++. SFML tries to hide the details from them the best it can, but at times it just can't keep the promise and has to let it seep out in some way.

One of these is obvious with sf::Shader. Obviously the user will need to know GLSL to make use of it, and in addition to that they will need to know everything else related to using shaders properly, such as proper texture management and uniform specification. At this point SFML just gives up and doesn't try to hide anything from the user any longer. If they need special functionality for their specific use case, they will have to take care of it themselves.

In the past, this was always hard to do, because SFML actively prevented users from extending the functionality of its classes when they just wouldn't cut it. This led to users mixing sfml-graphics and raw OpenGL code way more than necessary, which is why I tried to improve the situation by making it less necessary now than it previously was.

The second kind, and the way in which I use SFML is the "save me from having to write 1000s of lines of code" simple. SFML can simplify the code that people would have written without it, while making sure not to limit them in unreasonable ways. If you are already a seasoned OpenGL programmer, then you have probably gone through countless cycles of writing the same old boilerplate code over and over again. In this situation you are happy for every detour you can take that meets your requirements. A library might not guarantee that a detour exists for everything you are trying to do, but even if it increases your overall productiveness by just a bit, it would have already done its job.

I don't know about you, but I find many programmers these days unreasonably assuming that they should only have to use a single library and that library has to do everything they need it to do in their specific project. There are many libraries like this out there already, and over time they just get bloated to the point where I personally consider them unusable for the majority of my use cases. Instead of aiming to cover the superset of all use cases, SFML takes a different approach and tries to cover the common subset of all use cases in order not to introduce too much bloat for everybody. I'm not saying that the amount of subjective bloat will be the same for everybody, but it shouldn't reach a point where people have to start considering whether the library might actually hinder them from working effectively.

People really shouldn't be scared of having to combine multiple libraries in their projects, it has been said time and again: SFML isn't a game engine. SFML might be a maverick in this sense, but until now it hasn't harmed it in any way, and to maintain this stance, we simply have to be stricter than other libraries when it comes to the definition of the API.

Quote
By not having it, it's no more simple for a casual user, but much less simple for any programmer who is writing a serious software release.
If you replace "any programmer who is writing a serious software release" with "programmer who is used to how 90% of all other libraries function" you might be on to something. ;) Seriously though, the only reason I can think of for why people ask open source libraries to implement application specific feature XYZ is because a) it would be free of charge to them and b) it would save them manhours which they could spend on something else instead. If libraries charged their users for implementing very specific features, they would probably think twice about whether they should implement it themselves instead. We develop with the community in mind, and not single users.

You're basically "forcing" all those users to write this low level code themselves. That's the opposite of simple.
Do you really expect any arbitrarily selected application requirements to be reducible to an implementation that you will always deem "simple? It's a simple fact of life, not only in software development. The more complex the task at hand, the more effort it will take to fulfil said task. Building an airport will take considerably more work than building a house, even if the contractor has a lot of experience building runways and terminal buildings.

What I find really annoying is a current trend of programmers boasting resumes containing lines like "Has experience with library X, Y and Z" being employed in positions which might end up requiring a bit more expertise than that. If all these people really knew how to do is use said libraries, then what are they going to do if the libraries can't fulfil their employers' requirements? Go to the library and ask them to implement it just because that's the only thing they know how to use? If they ended up saying no, I have no idea what the company would end up doing, probably just dropping the feature altogether if I didn't know any better.

I kind of miss the days when people were considered "graphics programmers" or "network programmers". They would have the expertise to get the job done, no matter what the situation looked like. At the end of the day, when you have to ship the product, it doesn't matter how it is implemented, only that all requirements are met. Real programmers wouldn't complain about having to go a few levels lower when it was necessary, and would rejoice whenever a library did indeed end up saving them work. Too much burden is placed on libraries these days, simply because the people who claim they are "programmers" can't go through their routine of simply plugging modules together and ending up with a finished product. People have wondered why I consistently detest the Java "programming language", now they might have a few more clues.

Quote
Because it's like SFML does 95% of the base functionality that every fully robust app will need, but leaves that 5% where the user must delve into the very low level opengl code that SFML otherwise successfully abstracts away.
Same argument as above. If SFML really went out of its way to make sure 100% was covered in every scenario, you might have forgotten that those 5%s add up over time to something that is truly monstrous.

To me, the "S" in SFML is largely about allowing me to avoid low level opengl.
Avoiding is not the same as ignoring. I consider the former optional, while the latter is what some people hope to get away with, but end up burning themselves with. If you know what you are doing is beyond the capabilities of what most SFML users might make use of, always be prepared to write OpenGL code.

Quote
But SFML also allows for layers of depth. A casual user can create a few sprites and move them around.  A more advanced user can employ VertexArrays, Shaders, RenderTextures, etc.  The casual user can ignore those.  You guys make the use of these more advanced concepts (shaders, render textures, etc) MUCH more pleasant to work with ("simple") than writing the raw opengl.  So again, I think you have been largely successful here.  Except, perhaps, a few little edge cases like this.
You consider this an edge case; I consider it the next layer of depth, in your words. Just because OpenGL symbols aren't in the sf:: namespace doesn't mean that it is not considered the next logical step after usage of sf::Shader and sf::RenderTexture. We could just pull everything into an sf::ogl:: namespace, but since it would serve no practical purpose, we just recommend anybody who is interested to learn more about OpenGL and how they can use it alongside SFML.

Jabberwocky

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #13 on: April 13, 2015, 10:48:35 am »
Quote
In fact, the minimum allowed values are fairly generous for all but the most demanding applications if you ask me. Worrying that a program will fail to function properly because there might not be enough resources to use is not an OpenGL thing.


This doesn't make sense to me.

Maybe an SFML user is working on your new android port.  GLES2 defines a minimum texture size of 64x64.  So by your logic, nobody should use any textures larger than 64x64 on android?  That's absurd, and likely why SFML has implemented a resource check on the max texture size.  I fail to understand what's so different about texture units.

SFML's default OpenGL version on windows is 2.0, which only has a minimum number of 2 texture units for fragment programs.  Obviously most machines you encounter will have more than that.  But again, relying on the minimum isn't useful here. 

Sure, I could just enforce a higher OpenGL version.  But why would I want to artificially limit my game to run on OpenGL 3+ when I can get it to run on 2.x with a few graphics adjustments?  There's lots of old hardware out there.  Those cards may even be powerful enough to run at full graphics settings. 

Anyway, while we disagree on the above, I can respect your decision about what is and isn't in the scope of SFML.  So we can leave it there, if you like.

Quote
Having a central location for such a broad range of unrelated attributes isn't good style if you ask me.


Fair enough.
And thanks for listing which functions do already exist, or are planned.

Quote
There are 2 kinds of "Simple"...

Good explanation.  I'm glad SFML is taking the route you describe.

Quote
I don't know about you, but I find many programmers these days unreasonably assuming that they should only have to use a single library and that library has to do everything they need it to do in their specific project.

Agreed.  In the games industry, indies have flocked to Unity3D, which now also supports 2D stuff.  So yeah, that trend is definitely visible.  I can't stand working in a black box, closed source environment.  Which is why I use middleware like SFML.  It's definitely a lot more work, but the alternative is quite distasteful to me.

Quote
Seriously though, the only reason I can think of for why people ask open source libraries to implement application specific feature XYZ is because a) it would be free of charge to them and b) it would save them manhours which they could spend on something else instead. If libraries charged their users for implementing very specific features, they would probably think twice about whether they should implement it themselves instead. We develop with the community in mind, and not single users.

I'm a little disappointed in how you've interpreted this discussion.  I'm not here to get free labor out of you.  Although I do appreciate your expertise and willingness to talk here on the forum. 

Honestly, I already implemented what I needed in a fraction of the time we've both put into this thread.  I just wanted to propose what I thought might be a fairly simple change that would make SFML more useful to many users, not just me.  Shaders are not application specific.  They're a core part of SFML's API.

You may have insinuated I'm not a "real programmer" because I made this feature request.  Or maybe this was just a general rant.  Either way, I think it's best I leave that bit alone.  ;)

Thanks again for your time.
« Last Edit: April 13, 2015, 10:53:17 am by Jabberwocky »

binary1248

Re: GL_MAX_TEXTURE_IMAGE_UNITS?
« Reply #14 on: April 13, 2015, 11:52:03 am »
Quote
Maybe an SFML user is working on your new android port.  GLES2 defines a minimum texture size of 64x64.  So by your logic, nobody should use any textures larger than 64x64 on android?  That's absurd, and likely why SFML has implemented a resource check on the max texture size. I fail to understand what's so different about texture units.
I was referring exclusively to the texture image unit minimums, not the texture size minimum. The difference between the maximum texture size and the number of texture image units is that the former has been around for much longer, and is something SFML guarantees to provide, even on an OpenGL 1.1 context. The whole question of texture image units doesn't make much sense unless shaders are supported, which is not always the case and which is why you can check for optional shader support.

Quote
SFML's default OpenGL version on windows is 2.0, which only has a minimum number of 2 texture units for fragment programs.  Obviously most machines you encounter will have more than that.  But again, relying on the minimum isn't useful here.

Sure, I could just enforce a higher OpenGL version.  But why would I want to artificially limit my game to run on OpenGL 3+ when I can get it to run on 2.x with a few graphics adjustments?  There's lots of old hardware out there.  Those cards may even be powerful enough to run at full graphics settings.
I never said you should limit yourself to a specific OpenGL version, and trying to do so would often not get you anywhere anyway, as the implementation is allowed to create any newer version that is fully compatible with whatever you requested. The minimum limits are defined by version, and often for good reasons. If a system was only able to support up to 2.0 for whatever reason, then pushing your shaders to the maximum might not even be such a great idea in some cases.

Unlike DirectX, an OpenGL implementation is allowed to advertise capabilities as long as they are supported somehow. A legitimate disadvantage OpenGL has had since the beginning in comparison to DirectX is the lack of a way to check if a feature is truly implemented in hardware or is a mix of hardware/software or even fully implemented in software. This leads to strange situations where features that seem to be supported might reduce the rendering performance of an application down to unacceptable levels. For this reason alone, some OpenGL developers don't even rely on the advertised limits, rather judging by the "hardware class" of the GPU in question to enable certain features or not.

In your specific example, while 2.0 requires at least 2 texture units for fragment programs, even if the system advertises 8, it might lead to a significant performance impact since the latter 6 might not be fully accelerated. On the contrary if a legacy 3.2 context is created instead, you can probably rely on at least 16 units being available that are fully accelerated.

I've had cases where I also used to push GPUs to their maximum (according to the glGet limits) and noticed a performance drop at the higher end of the spectrum. Since then, I've made it a habit to stay well under the advertised limits in order not to "choke" the hardware. Like I said, I can't say for sure how AAA OpenGL games handle this, but I am pretty sure they don't probe the capabilities of single GPUs, instead they sort them into hardware classes for which they had done extensive testing during development. If you plan on supporting a wide range of graphics hardware generations, this is really the only sane thing to do.

Quote
I'm a little disappointed in how you've interpreted this discussion.  I'm not here to get free labor out of you. Although I do appreciate your expertise and willingness to talk here on the forum.

Honestly, I already implemented what I needed in a fraction of the time we've both put into this thread.
I know you've already implemented this yourself and are not asking for free labour, I was referring to feature requests in general, sometimes for seemingly trivial things that are obviously way beyond the scope of SFML. The best way for something to be implemented in SFML is a working implementation to accompany the feature request. Since people often fail at that already, I just have to assume that they didn't give the possible implementation enough thought, and decided to request a feature that they happened to need for the project they are currently working on. It's happened many times before, just browse through the feature request forums. ;)

Quote
I just wanted to propose what I thought might be a fairly simple change that would make SFML more useful to many users, not just me.
This is something that many people who have proposed features in the past seem to assume. I don't know about you, but I've seen a fair share of SFML applications and other use cases, and none of them rely on knowing the texture image unit limit in order to function. What I always recommend people do, in case they think we have misjudged the usefulness of their suggestion, is to find people/projects that show that it is a direly needed feature. Nothing speaks louder than a bunch of links to the code of SFML projects that all contain the same OpenGL code that is lacking in SFML. Often times, suggestions already fail to gain any momentum at this step, and unfortunately I think your suggestion would too. ;)

Quote
Shaders are not application specific. They're a core part of SFML's API.
While they are part of the SFML API, I wouldn't say that they are part of the core API. A developer that has to support hardware that may very well be ancient can't rely on shaders being universally supported. Instead of enabling functionality that is only possible in the presence of shaders, they choose to just ignore them. sfml-graphics is completely usable without shader support. In fact, SFML's sf::RenderTexture works on all hardware thanks to its fallback implementation, so sf::Shader is really the only thing you can't rely on being available, and thus it is far from being core in any way.

While you are correct as well in that shaders in general are not application specific, specific shader use cases are. Unfortunately, not many people before you have had to push them to their implementation limits, so your specific use case ends up being rather application specific.

Quote
You may have insinuated I'm not a "real programmer" because I made this feature request.
Not at all. If you belonged to that specific class of "programmer" I was referring to, you would still be pushing for this feature without having even a clue of what to do in case it didn't get implemented. ;) I knew from the start you already had something that worked for you, so I was just expressing my personal opinion about those that are obviously not nearly as capable as yourself. Those people can at times turn out to be very demanding when it comes to their expectations of what is to be implemented, even going so far as threatening to run away to another library altogether, like that is going to convince us. ::)

 
