Yes, sorry about that, my post was heavily rushed as it was getting really late for me. At least I know someone is reading. :p
1. It would be nice to know exactly where the GL_INVALID_OPERATION comes from. SFML tells you the exact line number when building in debug.
Texture.cpp, lines 490, 516, 517, and 520. Looking at the source, these are where calls to glBindTexture, glMatrixMode, and glLoadMatrixf are made.
2. Threads + (it works sometimes and sometimes not) sounds like your typical race condition to me. SFML wasn't designed to be thread safe, and although OpenGL contexts are supposed to enable multi-threading without explicit synchronization, I really wouldn't rely on it. Where do you synchronize? I don't see it in the code you provided.
Yeah, it definitely sounds like a typical data race. What confuses me the most, however, is that (as I tried and failed to explain in my initial post) the loader thread has practically exclusive access to the data: other than the sf::RenderTexture used to draw the animated loading screen (which was already loaded before the thread is even launched), the main thread and the loader thread never touch the same data, because the main thread is more or less idle until the loader thread signals it is done (by calling Gamestate::Notify()).
As for synchronization, the threads are never joined; after completing their task they are put to sleep until notified to run another task, which is, at its core, how a thread pool works. Do you think adding explicit synchronization could have an effect, seeing as SFML is not particularly thread-safe, as you mentioned?
3. Why are you using glFlush() and glFinish() everywhere? glFlush() makes sense, but only when you tell OpenGL to actually do something prior, which you do in 2 of 3 blocks. glFlush() followed directly by glFinish() makes no sense at all. glFinish() implies glFlush(). glFinish() is evil, try to never have to use it, it will kill any advantage the asynchronous nature of OpenGL provides you. In your application, it will even synchronize with all other contexts (and threads) since it is a GPU synchronization command. If they are there just for debugging then you could have mentioned it too.
Yeah, it was just an attempt at debugging the situation. I had read that glFinish() and/or glFlush() could solve issues where something tried to access a texture that wasn't fully uploaded yet, corrupting the data and potentially causing these problems. I do appreciate the insight into the difference between the two, though.
I assume that the loader takes a bit to load and parse the files, which is the reason why you threw that part into a thread. It sure won't help with the time it takes OpenGL to do stuff (which happens immediately from our perspective). Did you measure where most of the time is spent? Because if the file loading and parsing doesn't take any time at all, I don't see how throwing all that stuff into a separate thread can help.
Relatively speaking, the loader does take some time to complete (about 3-5 seconds on my machine) on a level that isn't particularly large (84x32 tiles of size 32x32, using a tileset of only 25 distinct tiles). Because of the time it takes (which will likely scale with the size and complexity of levels), I placed it in another thread so that a loading screen could be drawn and animated without being blocked (the loader does not return until it has finished or failed).
On the other hand, I have not taken exact measurements of where the majority of the time is spent, though I could do that. My best guess would be loading the textures in the TMX loader. It seems the TMX loader loads the files into an sf::Image before uploading them to an sf::Texture, which is probably helpful as far as OpenGL is concerned, but I doubt it has much impact either way here.
My initial guess is that your code relies on "unreliable" cross-context behaviour that even the specification doesn't guarantee. In order to tell what is really going on, we will need much more information than you have provided us. We need to know what contexts elsewhere are doing as well.
If you can't provide more code for certain reasons, try to construct a minimal example that mimics the way your application uses contexts in a multi-threaded way. Run it multiple times, and if the same error occurs you can post that here instead.
It certainly seems to be a context problem. I'll try to come up with some sort of example, although I may have narrowed the problem down slightly: it's a problem with LTBL (and I probably should have mentioned my use of that library in my hastily-written first post). To sum things up, this error started happening when I switched from drawing directly to the RenderWindow to drawing to RenderTextures (I wanted to remove the visual distortion I was getting, and found that drawing to a RenderTexture, then applying the texture to a sprite and scaling it up to the window size, gave pixel-perfect results).
However, LTBL only accepted a RenderWindow as one of its parameters for the LightSystem class. See here:
LightSystem::Create(const AABB &region, sf::RenderWindow* pRenderWindow, const std::string &finImagePath, const std::string &lightAttenuationShaderPath)
I figured they both utilized sf::RenderTarget, so changing the parameter to a RenderTexture (and any other references needed) should do the trick.
LightSystem::Create(const AABB &region, sf::RenderTexture* pRenderWindow, const std::string &finImagePath, const std::string &lightAttenuationShaderPath)
Then for that parameter, I just pass in the RenderTexture defined in my App class, via GetRenderTexture(). The class looks like this:
class App
{
public:
    App(std::string app_name);

    void Init();
    bool IsRunning() const;
    void StopRunning();
    void ShutDown();
    void ShutDownRenderer();
    void RestartRenderer();

    sf::RenderWindow *GetWindow();
    sf::RenderTexture *GetRenderTexture();

    float GetDeltaTime();
    void SetDeltaTime(float dt);
    ConfigFile *GetConfig();
    void SetFramerateLimit(int fps);

    void Error_Fatal(std::string errmsg, ...);
    void Error(std::string errmsg, ...);

    double frameTime;

private:
    void InitRenderer();

    int fpslimit;
    float deltaTime;

    sf::RenderWindow *m_window;
    sf::RenderTexture *m_renderTexture;
    sf::ContextSettings m_settings;
    sf::VideoMode m_vidmode;
    sf::Context context;
    ConfigFile *cfg;

    std::string m_name;
    bool m_running;
};
(The extra sf::Context is left over from some debugging, although I have yet to actually test removing it. I'll give that a shot right now; it is instantiated before the RenderWindow and RenderTexture are, after all. But since contexts are supposed to be shared by default, this shouldn't affect anything, should it?)
I'm unsure if you're familiar with the LTBL source; it uses a lot of raw OpenGL. When stepping through the debugger, this function in particular is where the GL errors are raised:
void LightSystem::RenderLightTexture()
{
    Vec2f viewSize(m_viewAABB.GetDims());

    m_pWin->setActive();

    // Translate by negative camera coordinates. glLoadIdentity will not work, probably
    // because SFML stores view transformations in the projection matrix
    glTranslatef(m_viewAABB.GetLowerBound().x, -m_viewAABB.GetLowerBound().y, 0.0f);

    //m_compositionTexture.getTexture().bind();
    sf::Texture::bind(&m_compositionTexture.getTexture());

    // Set up color function to multiply the existing color with the render texture color
    glBlendFunc(GL_DST_COLOR, GL_ZERO); // Separate allows you to set color and alpha functions separately

    glBegin(GL_QUADS);
        glTexCoord2i(0, 0); glVertex2f(0.0f, 0.0f);
        glTexCoord2i(1, 0); glVertex2f(viewSize.x, 0.0f);
        glTexCoord2i(1, 1); glVertex2f(viewSize.x, viewSize.y);
        glTexCoord2i(0, 1); glVertex2f(0.0f, viewSize.y);
    glEnd();

    if (m_useBloom)
    {
        //m_bloomTexture.getTexture().bind();
        sf::Texture::bind(&m_bloomTexture.getTexture());

        glBlendFunc(GL_ONE, GL_ONE);

        glBegin(GL_QUADS);
            glTexCoord2i(0, 0); glVertex2f(0.0f, 0.0f);
            glTexCoord2i(1, 0); glVertex2f(viewSize.x, 0.0f);
            glTexCoord2i(1, 1); glVertex2f(viewSize.x, viewSize.y);
            glTexCoord2i(0, 1); glVertex2f(0.0f, viewSize.y);
        glEnd();
    }

    // Reset blend function
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    m_pWin->resetGLStates();
}
The line m_pWin->setActive(); is something I added while debugging an earlier problem where the LightSystem would only draw a full-white quad across the screen (i.e. nothing was being drawn, or texturing was disabled, or something similar). I mention it because this is clearly a point where a context switch happens. Note, however, that nothing related to the LightSystem is called before the loader finishes.