
Author Topic: Does sf::Context need to be in scope?


sharethis

Does sf::Context need to be in scope?
« on: March 13, 2014, 09:30:57 am »
I recently implemented a worker thread in my application to upload large vertex buffers to the GPU asynchronously. My first attempt was to declare sf::Context context; as the first statement in the thread, before entering a loop that calls functions that make OpenGL calls.

void Worker()
{
    sf::Context context;
    while (running) {
        Buffer(); // uses OpenGL calls
    }
}

But I got undefined results. The uploaded meshes were sometimes drawn distorted, sometimes completely empty, and sometimes correct. So I moved the context creation inside the Buffer() function, and that solved my issue.

However, it might be slow to create a new context every time the function gets called, when I only need one. Do I have to pass the context object as a function parameter so that it is visible in the scope of the function body? Why can't I use it the way I first intended?

Laurent

Re: Does sf::Context need to be in scope?
« Reply #1 on: March 13, 2014, 09:53:21 am »
You must definitely declare it outside the loop. Maybe you just missed calls to glFlush() to synchronize the uploaded data with the other threads' contexts?
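
Roughly like this (just a sketch based on your snippet; Buffer() and running are your own names, and the glFlush() placement assumes Buffer() is where the uploads happen):

#include <SFML/Window/Context.hpp>
#include <SFML/OpenGL.hpp>

void Worker()
{
    sf::Context context; // one context for the whole lifetime of the thread

    while (running)
    {
        Buffer(); // issues the glBufferData() calls

        // flush so the commands are actually submitted and become
        // visible to the context used by the rendering thread
        glFlush();
    }
}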
Laurent Gomila - SFML developer

sharethis

Re: Does sf::Context need to be in scope?
« Reply #2 on: March 14, 2014, 02:12:37 am »
This is another issue I still have. The uninitialized buffers get drawn for one frame, before the data is fully uploaded. I already tried using glFlush() and even glFinish(), without any effect.

However, when I move sf::Context context; out of the function and place it as the first statement in the worker thread, the behavior is more random. As I said, the buffers were sometimes correct, sometimes empty, and sometimes completely distorted, and not just for one frame but until I updated them again, which again resulted in one of the three cases.

Do you have any idea how I could debug this issue?

Laurent

Re: Does sf::Context need to be in scope?
« Reply #3 on: March 14, 2014, 07:29:17 am »
I have no idea, sorry :-\
Laurent Gomila - SFML developer

wintertime

Re: Does sf::Context need to be in scope?
« Reply #4 on: March 14, 2014, 02:05:19 pm »
It seems you are using the buffer before it is uploaded. When a call returns, most of the time that only means a work item has been queued for the hidden OpenGL thread, not that all the work is already done. glFlush is like saying: stop buffering up work and actually start doing it (though you still don't know when it is completed).
If you want to be safe, you probably need to call glFinish (or use fences and check when something is ready, to avoid having that thread wait until the whole queue drains) and only after that communicate to the main thread that the buffer is usable. Or take a slight gamble: choose some x and wait x+1 frames before using the buffer, assuming the driver won't queue up more than x frames of commands.
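
For example, with a fence it could look roughly like this (just a sketch; it assumes an OpenGL 3.2+ context and a loader such as GLEW for the sync functions, and bufferReady is a made-up name for whatever signal you send to the main thread):

#include <GL/glew.h>
#include <atomic>

std::atomic<bool> bufferReady(false); // hypothetical flag the main thread polls

void UploadAndSignal()
{
    // ... glBufferData() calls for the new buffers ...

    // insert a fence right after the upload commands
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    // wait (with an implicit flush) until the GPU has processed
    // everything up to the fence, without draining the whole queue
    GLenum result;
    do
    {
        result = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000); // 1 ms
    }
    while (result == GL_TIMEOUT_EXPIRED);

    glDeleteSync(fence);

    // only now tell the main thread that the buffer is usable
    bufferReady.store(true);
}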

I think the easiest thing would be to restrict OpenGL calls to a single thread and let the worker threads only do things like loading from disk and unpacking, then inform the main thread that it needs to make the buffer calls for the GPU upload itself.
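
As a rough sketch of that approach (MeshData, the queue, and the function names are all made up, and the buffer functions are assumed to be available through GLEW or a similar loader):

#include <SFML/OpenGL.hpp>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// hypothetical CPU-side mesh data produced by a worker thread
struct MeshData
{
    std::vector<float> positions, normals, texcoords;
};

std::queue<MeshData> pendingUploads; // filled by workers, drained by the main thread
std::mutex pendingMutex;

// worker thread: only disk I/O and unpacking, no OpenGL at all
void LoadFromDisk(const std::string& path)
{
    MeshData data;
    // ... read and decode the file into data ...

    std::lock_guard<std::mutex> lock(pendingMutex);
    pendingUploads.push(std::move(data));
}

// main thread, e.g. once per frame: do the GPU uploads itself
void ProcessPendingUploads()
{
    std::lock_guard<std::mutex> lock(pendingMutex);
    while (!pendingUploads.empty())
    {
        MeshData data = std::move(pendingUploads.front());
        pendingUploads.pop();

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     data.positions.size() * sizeof(float),
                     data.positions.data(), GL_STATIC_DRAW);
        // ... same for normals and texcoords, then store the ids in the model ...
    }
}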

sharethis

Re: Does sf::Context need to be in scope?
« Reply #5 on: March 15, 2014, 02:56:00 pm »
Thanks for your advice. Please read my second post: I already tried using glFlush() and even glFinish(), but that didn't help either. It has something to do with how SFML creates the second OpenGL context, I think.

wintertime

Re: Does sf::Context need to be in scope?
« Reply #6 on: March 15, 2014, 08:37:57 pm »
I know you tried to call glFinish, but from what you wrote, you did not communicate back to the other thread that this had happened before it used the buffer.

sharethis

Re: Does sf::Context need to be in scope?
« Reply #7 on: March 16, 2014, 11:26:56 am »
That's right. I called glFinish() from the worker thread after the glBufferData() calls to wait for the buffers to be uploaded completely. After that I swap the buffer ids of the model instance that the main thread reads, so it should use the old buffers until then. Here is a short code example to illustrate.

struct Model {
    GLuint positions, normals, texcoords;
};

// worker thread
void Load(Model &model)
{
    Model loaded;
    // generate buffers and fill them from disk
    // ...

    // wait for buffer upload to finish
    // main thread shouldn't be affected
    glFinish();

    // swap buffers
    Model old = model;
    model = loaded;

    // delete old buffers (glDeleteBuffers takes a pointer to the ids)
    glDeleteBuffers(1, &old.positions);
    glDeleteBuffers(1, &old.normals);
    glDeleteBuffers(1, &old.texcoords);
}
 

wintertime

Re: Does sf::Context need to be in scope?
« Reply #8 on: March 16, 2014, 01:32:01 pm »
That's so wrong. What can happen is this: the main thread reads the id values and starts using one, two, or three of the buffers; then you change the struct, so the main thread may end up using a mix of old and new buffers; you delete a buffer that is still in use while it is being drawn from, or maybe even before it is bound; and then the main thread tries to bind, use, or unbind a deleted buffer, or never unbinds it, depending on whether it saved the id.
At the very least you need a mutex to protect against this, but it is better to avoid shared data altogether: send a message through a suitably thread-safe message queue to the main thread with the id numbers of the new buffers, and let the main thread remember which buffers to use and delete its old buffers itself at a time when doing so does not mess things up.
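
As a sketch of what I mean (the message struct, queue, and function names are made up; Model is the struct from your post, and any thread-safe queue will do in place of the mutex-guarded std::queue):

#include <SFML/OpenGL.hpp>
#include <mutex>
#include <queue>

// hypothetical message carrying the ids of a freshly uploaded model
struct NewBuffers
{
    GLuint positions, normals, texcoords;
};

std::queue<NewBuffers> messages; // worker -> main thread
std::mutex messagesMutex;

// worker thread: after the glBufferData() calls and glFinish(), just post the new ids
void NotifyMainThread(const NewBuffers& ids)
{
    std::lock_guard<std::mutex> lock(messagesMutex);
    messages.push(ids);
}

// main thread, between frames: adopt the new buffers and delete the old ones
void ProcessMessages(Model& model)
{
    std::lock_guard<std::mutex> lock(messagesMutex);
    while (!messages.empty())
    {
        NewBuffers fresh = messages.front();
        messages.pop();

        Model old = model; // remember the old ids
        model.positions = fresh.positions;
        model.normals   = fresh.normals;
        model.texcoords = fresh.texcoords;

        // safe here: the main thread is not drawing with them right now
        glDeleteBuffers(1, &old.positions);
        glDeleteBuffers(1, &old.normals);
        glDeleteBuffers(1, &old.texcoords);
    }
}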