
Author Topic: glGetError with Threaded Optimization causes high CPU usage despite vsync


dialer
  • Newbie
  • Posts: 7
This is meant as information rather than a question or a feature request. I'm making this thread because other people must have run into this problem, yet I was unable to find anything coherent on Google that presents the issue well enough, and I want to save others the frustration.

Problem:

When rendering with vsync enabled, you would expect CPU usage to be low, because the application blocks while waiting for the next frame. This is not the case when using SFML in debug mode: CPU usage sits at 100% on one core.

Explanation:

This happens if you have "Threaded Optimization" enabled in the NVidia Control Panel (it is enabled by default). The issue is that in SFML debug builds, the glGetError function is automatically called after every OpenGL call to help find errors. That function causes a busy wait.

Typically (with vsync enabled, though this is technically not specified), the call to Window::display (or rather, SwapBuffers) causes the thread to block (a non-busy wait) until it is appropriate to actually swap the buffers with respect to vsync. All the other rendering functions that OpenGL offers are highly asynchronous to improve performance. Calling glGetError (and a handful of other functions that read back the current rendering state in a consistent way) requires the CPU thread to synchronize with the rendering process, and the driver isn't very smart about it: if vsync is enabled, it spins until the next frame is synchronized, leaving no time for SwapBuffers to wait in a sensible manner. In other words, you should absolutely not call glGetError each frame (definitely not in a Release build, and IMO not even in a Debug build).
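To illustrate, here's a minimal sketch (SFML 2.x, untested as posted) where the explicit glGetError stands in for what SFML's debug-mode glCheck does after every GL call:

Code:
#include <SFML/Window.hpp>
#include <SFML/OpenGL.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "vsync busy-wait repro");
    window.setVerticalSyncEnabled(true);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        glClear(GL_COLOR_BUFFER_BIT);

        // With NVidia's Threaded Optimization on, this per-frame query
        // synchronizes with the driver thread by spinning, so one core
        // sits at 100% instead of sleeping in the swap below.
        glGetError();

        window.display(); // SwapBuffers: where the thread *should* block
    }
}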

Afterword:

In SFML, the error checking is currently disabled via a compiler switch in the Release build (GLCheck.hpp). However, in case you want to avoid high CPU load even in Debug builds, I propose a different approach: allow short-circuiting the error checking with a global bool. Yes, that sounds like an intern writing nightmarish code, but it was a reasonable choice for me at least. I actually have it in my Release build as well.
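A sketch of what I mean, as a modification of SFML's glCheck macro; the flag name glChecksEnabled is my own, and the exact glCheckError signature may differ between SFML versions:

Code:
namespace sf { namespace priv {
    // Global switch; defaults to true so nothing changes out of the box.
    extern bool glChecksEnabled;

    void glCheckError(const char* file, unsigned int line, const char* expression);
}} // namespace sf::priv

#define glCheck(expr)                                              \
    do                                                             \
    {                                                              \
        expr;                                                      \
        if (sf::priv::glChecksEnabled)                             \
            sf::priv::glCheckError(__FILE__, __LINE__, #expr);     \
    } while (false)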

It allows you to enable error checking during interesting sections that don't happen every frame, while having it disabled during normal, dull rendering. It also allows you to easily switch it on and off for a few frames, should you ever need to actually debug the dull rendering.
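Usage then looks something like this (rebuildRenderTexture is a made-up stand-in for whatever rare code path you want to validate):

Code:
// Inside the usual render loop; window and sprite set up elsewhere.
sf::priv::glChecksEnabled = false; // dull per-frame rendering: checks off
window.clear();
window.draw(sprite);
window.display(); // vsync can now block properly in SwapBuffers

// Rare, interesting section: turn the checks back on temporarily.
sf::priv::glChecksEnabled = true;
rebuildRenderTexture(); // hypothetical code path worth validating
sf::priv::glChecksEnabled = false;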
« Last Edit: August 06, 2017, 10:31:41 pm by dialer »

eXpl0it3r
  • SFML Team
  • Hero Member
  • Posts: 11028
Interesting to hear that the glCheck calls are the culprit with NVidia's Threaded Optimization.

Personally, I don't see the need to change anything. When you're in debug mode, you're in a development state:

  • You accept that there are performance costs.
  • You are able to shape your development environment accordingly, e.g. disable Threaded Optimization.
  • You do want to get all the possible errors.
  • If you do want to disable the checks for something, you can simply undefine SFML_DEBUG.

As we get a lot of novice programmers, it's important that generated errors are displayed to the user by default, so they at least know something is going wrong. Additionally, if OpenGL (or OpenAL) issues occurred and error messages were disabled by default, we'd end up having to explain every single time how to enable error messages, and then the user would have to re-test.
For advanced developers who think they can do better, there is, as mentioned, the option of undefining SFML_DEBUG, which will disable the error checking. ;)
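For reference, the switch in GLCheck.hpp looks roughly like this (paraphrased from the SFML 2.x sources, so details may differ):

Code:
#ifdef SFML_DEBUG
    // Debug mode: test every OpenGL call
    #define glCheck(expr) do { expr; sf::priv::glCheckError(__FILE__, __LINE__, #expr); } while (false)
#else
    // Release mode: no overhead
    #define glCheck(expr) (expr)
#endif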
Official FAQ: https://www.sfml-dev.org/faq.php
Official Discord Server: https://discord.gg/nr4X7Fh
——————————————————————
Dev Blog: https://duerrenberger.dev/blog/

dialer
  • Newbie
  • Posts: 7
To be clear: I'm not requesting that this be changed, but I do think it should be mentioned in the documentation, and also very early in the tutorials.

I just wanted to add that I personally think it could be a good idea to allow disabling it upon explicit request at *runtime* (note *disable*, meaning I agree that it should be enabled by default). If others who stumble upon the problem agree, they can easily implement it themselves (after all, developing with the SFML sources is pretty reasonable, unlike Some Dumb Library I could name, so I have my own modified version of SFML anyway).

dialer
  • Newbie
  • Posts: 7
Apparently, wglMakeCurrent also causes this busy wait, which is a much bigger problem than glGetError. This is an issue with OpenGL (or rather the driver) though, not with SFML.

That means you cannot switch GL contexts each frame, which in turn means you cannot render to RenderTextures each frame without pegging a core.
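Sketched, the problematic pattern looks like this (assuming, as in my SFML version, that drawing to a RenderTexture activates its own GL context internally):

Code:
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "render-to-texture");
    window.setVerticalSyncEnabled(true);

    sf::RenderTexture target;
    if (!target.create(512, 512))
        return 1;

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        // Drawing to the RenderTexture switches GL contexts under the
        // hood; with Threaded Optimization that switch busy-waits.
        target.clear();
        // ... draw the scene into target ...
        target.display();

        window.clear();
        window.draw(sf::Sprite(target.getTexture()));
        window.display(); // the vsync wait degenerates into a spin again
    }
}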

On top of that, I encountered another problem with vsync and posted it on Stack Overflow:

https://stackoverflow.com/questions/45676892/reliable-windowed-vsync-with-opengl-on-windows

I'm starting to believe that vsync with OpenGL is the devil and NOBODY has ever tested it thoroughly before. That speaks volumes about the quality of the OpenGL developers' work, IMO, but I'm also beginning to understand why Jon Blow thinks OpenGL is designed by crazy people.

 
