I'm really sorry, I don't have time to write and test a solution for you.
I think you should:
- create a public function named "releaseThreadContext"
- inside this function, remove internalContext from the internalContexts set and then delete it (a rough sketch follows below)
But this will only work if you do nothing after executing this function (i.e. don't call setActive(false) or destroy a render window / render texture afterwards).
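Roughly something like this; an untested sketch only, assuming the internalContext thread-local pointer, the internalContexts set and the internalContextsMutex that already exist inside SFML's GlContext.cpp (the same names appear in the snippets further down):

// Untested sketch, to be added inside SFML's GlContext.cpp:
void GlContext::releaseThreadContext()
{
    sf::Lock lock(internalContextsMutex);    // guard the global set of internal contexts
    internalContexts.erase(internalContext); // forget this thread's internal context...
    delete internalContext;                  // ...and destroy it
    internalContext = NULL;                  // assuming the thread-local pointer is assignable,
                                             // so the thread can lazily create a new one later
}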
#ifdef __linux__
    #include <X11/Xlib.h>
#endif
#include <SFML/Graphics.hpp>
#include <chrono>
#include <thread>

int main()
{
#ifdef __linux__
    XInitThreads(); // Required on Linux when several threads use the windowing system.
#endif

    // The drawer thread, active during the entire program's lifetime, constantly drawing /////////
    std::thread([]()
    {
        sf::RenderWindow wnd(sf::VideoMode(300, 200, 32), "Title");
        wnd.setFramerateLimit(1);
        while (true)
        {
            wnd.clear();
            wnd.display();
        }
    }).detach();
    //////////////////////////////////////////////////////////////////////////////////////////////

    // Simulation of switching states /////////////////////////////////////////////////////////////
    while (true)
    {
        std::thread([]()
        {
            sf::RenderTexture* rt = new sf::RenderTexture;
            delete rt; // rt itself is gone, but the thread's internal GL context would linger...
            sf::GlResource::releaseThreadContext(); // ...so release it explicitly
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }).join();
    }
    //////////////////////////////////////////////////////////////////////////////////////////////

    return 0;
}
I gave sf::GlResource the following public static member function:
static void releaseThreadContext();
---
void GlResource::releaseThreadContext()
{
    priv::GlContext::releaseThreadContext();
}
I therefore gave sf::priv::GlContext the following public static member function:
static void releaseThreadContext();
---
void GlContext::releaseThreadContext()
{
    sf::Lock lock(internalContextsMutex);
    internalContexts.erase(internalContext);
    delete internalContext;
}
However, no resources are freed in my testing scenario. I wonder what I am doing horribly wrong.
EDIT:
I replaced the body of GlContext::releaseThreadContext() with the following:
sf::Lock lock(internalContextsMutex);
for (std::set<GlContext*>::iterator it = internalContexts.begin(); it != internalContexts.end(); ++it)
    delete *it;
internalContexts.clear();
And it seems to work even when already-existing sprites are being drawn, among other things. I'll now test it in my program to see if it works just as well there.
EDIT2: TOTALLY AWESOME! IT WORKS! Aaaah, finally, after all these hours (so happy). I basically call sf::GlResource::releaseThreadContext(); after my drawing thread finishes, so another drawing thread can start drawing. This works perfectly: no segfaults, no error messages from OpenGL, nothing. Absolutely wonderful. Magnificent. *good feels*
But it still needs to be tested thoroughly. I really hope this stays stable in the long run. It would also be a nice feature to add officially, to make SFML more thread-friendly... Aaah, now I can rest.
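For reference, a simplified, illustrative sketch of the pattern (drawingThread and the texture size are made up, not my actual program): each short-lived drawing thread ends with the releaseThreadContext() call, as in the test program above, so the next thread starts with a fresh internal context.

#ifdef __linux__
    #include <X11/Xlib.h>
#endif
#include <SFML/Graphics.hpp>
#include <thread>

void drawingThread() // hypothetical thread function, for illustration only
{
    {
        sf::RenderTexture rt;
        rt.create(256, 256);
        rt.clear();
        // ... draw sprites to rt, grab rt.getTexture(), etc. ...
        rt.display();
    } // rt is destroyed here, before the context is released

    sf::GlResource::releaseThreadContext(); // must remain the last GL-related call in this thread
}

int main()
{
#ifdef __linux__
    XInitThreads();
#endif
    std::thread(drawingThread).join(); // first "state"
    std::thread(drawingThread).join(); // the next state gets a fresh internal context
    return 0;
}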