(This theory is totally incorrect; something else happens, something Windows specific.) I'm really not sure of the details and it's too late for me to read into the code or search around right now, but I think this is about right. I think it has to do with spooky action at a distance between contexts and threads.
Another 'solution' is just creating an sf::Context in the first line of your main and letting it stay there, thus creating the context explicitly in that thread. Creating a thread that just runs forever and only holds an sf::Context would actually work too (new an std::thread and give it a function that creates a local sf::Context on the stack and then does while(1)).
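A minimal sketch of the first variant (assuming SFML 2.x and that the rest of your program stays as it is); the sf::Context lives until the end of main, so the global context count never drops back to 0 while resources created later still exist:

```cpp
#include <SFML/Window/Context.hpp>

int main()
{
    // Explicitly create a GL context in the main thread and let it live
    // until the very end of main, so the global context count never drops
    // back to 0 while textures/fonts created later still exist.
    sf::Context context;

    // ... the rest of the program (threads, windows, drawing) ...
}
```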
Contexts are per thread and fonts are loaded into textures lazily. If you call getLocalBounds it'll end up building the text vertices and loading the glyphs to get their sizes, kerning and so on. That'd cause a context to be implicitly created behind the scenes in the main thread.
If you don't do that, then the first call that causes textures to be loaded is the draw call in the thread, and the context it happens in is created and destroyed by the sf::RenderWindow (or rather its base sf::Window part) in the thread.
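Roughly the kind of setup I mean, as a sketch and not anyone's exact code (SFML 2.x assumed; the font file and window parameters are made up). The window and its context exist only inside the thread, and whether the glyph textures get created against the main thread's implicit context or the render thread's context depends on whether something like getLocalBounds touched them first:

```cpp
#include <SFML/Graphics.hpp>
#include <thread>

int main()
{
    sf::Font font;
    font.loadFromFile("arial.ttf");   // hypothetical font file
    sf::Text text("hello", font, 30);

    // The "fix": computing the bounds loads the glyphs into the font's
    // textures right here, implicitly creating a context in the main thread.
    // text.getLocalBounds();

    std::thread renderer([&text]()
    {
        // The window (and the context it owns) is created and destroyed
        // entirely inside this thread.
        sf::RenderWindow window(sf::VideoMode(640, 480), "test");
        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            window.clear();
            window.draw(text);        // without the fix, glyphs load here
            window.display();
        }
    });
    renderer.join();

    // font (and the glyph textures it owns) is destroyed at the end of main,
    // possibly after every context has already been destroyed.
}
```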
So I'm not sure if this is a thing... does the texture (the GL one referred to by that unsigned ID) that the sf::Texture in the font pages holds get sort of orphaned?
I think the problem is that you reach 0 contexts between creating the texture and destroying it.
Without your fix it goes like this: the context in the thread is made (1), the texture is created, the context in the thread is destroyed (0), the texture is destroyed (so an implicit context is made, but since we reached 0 all GL state was thrown away, so that unsigned ID is no longer valid).
With the fix it goes like this: the context in main is made (1), the texture is created, the context in the thread is created (2), the context in the thread is destroyed (1), the texture is destroyed (there were never 0 contexts between its creation and this point, so the ID is still valid), the context in main gets destroyed (0).
It's similar with a runaway thread that just creates a context, or with an explicit context created in main. The point is to never get to a total GL shutdown (0 contexts) and orphan resource IDs, because then the C++ classes' dtors will try freeing them in a fresh context that's unrelated to all the previous ones.
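The 'runaway thread' variant would look something like this sketch (C++11 std::thread assumed); its only job is to own one sf::Context forever so the count never reaches 0:

```cpp
#include <SFML/Window/Context.hpp>
#include <chrono>
#include <thread>

int main()
{
    // A detached thread whose only purpose is to own an sf::Context for the
    // whole lifetime of the process, keeping the context count above 0.
    std::thread keepAlive([]()
    {
        sf::Context context;    // local context on this thread's stack
        while (true)            // never returns, so the context is never destroyed
            std::this_thread::sleep_for(std::chrono::seconds(1));
    });
    keepAlive.detach();

    // ... the rest of the program ...
}
```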
I recall there was an overhaul of context handling a while back too. Before that I think there was a global hidden context that kinda leaked, and that prevented problems like these. But I'm also not sure on that right now. Maybe you could find something on the forums.
There is some mention of context and thread management in here:
https://www.sfml-dev.org/tutorials/2.4/window-opengl.php
This might help with understanding threading + GL contexts:
https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_threading/opengl_threading.html
I also recall that it often gets problematic, especially once stuff starts flying around between different threads. I'm not sure how relevant this is in the Win10 era (I doubt multi-threaded GL, which they labeled legacy years ago, is some sort of holy grail for Microsoft programmers in the face of Vulkan and DX11 and 12 though...), but this is from an interview from 5 years ago:
John Carmack - This was explicitly to support dual processor systems. It worked well on my dev system, but it never seemed stable enough in broad use, so we backed off from it. Interestingly, we only just found out last year why it was problematic (the same thing applied to Rage’s r_useSMP option, which we had to disable on the PC) – on windows, OpenGL can only safely draw to a window that was created by the same thread. We created the window on the launch thread, but then did all the rendering on a separate render thread. It would be nice if doing this just failed with a clear error, but instead it works on some systems and randomly fails on others for no apparent reason.