
Author Topic: How to get rid of memory leak with a render thread inside a thread?


brodajarek3

  • Newbie
  • Posts: 3
I want my rendering to be multithreaded. I also want window events to be handled in parallel with the rendering. I don't know much about SFML, so the first thing I came up with was to create a new thread that runs alongside the event loop, like this:

#include <SFML/Window.hpp>
#include <cstdlib>
#include <thread>

void renderingThread(sf::Window* window)
{
    while (window->isOpen())
    {
        // rendering will go here
    }
}

int main()
{
    sf::ContextSettings contextSettings;
    contextSettings.depthBits = 24;
    contextSettings.majorVersion = 4;
    contextSettings.minorVersion = 6;
    contextSettings.attributeFlags = contextSettings.Core;

    sf::Window window(sf::VideoMode(640, 480), "SFML window with OpenGL", sf::Style::Default, contextSettings);

    window.setActive(false);

    std::thread t1([&window]() {
        renderingThread(&window);
    });

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();

            if ((event.type == sf::Event::KeyPressed) && (event.key.code == sf::Keyboard::Escape))
                window.close();
        }
    }

    t1.join();
    return EXIT_SUCCESS;
}
 

Now I have to create another two threads: the first one to queue/push GL commands and the second one to dequeue/pop and actually run them. To simplify this, I created one new thread that just clears the default framebuffer to a colour. Clearly I have to activate the context and swap the buffers, so window->setActive(true); and window->display(); are called.

void renderingThread(sf::Window* window)
{
    while (window->isOpen())
    {
        std::thread thread([&window]() {
            window->setActive(true);

            glClearColor(1, 1, 1, 1);
            glClear(GL_COLOR_BUFFER_BIT);

            window->display();
        });
    }
}

When I run this code, memory usage rises every second. `window->setActive(true);` seems to be the problem.
How can I fix this and achieve a similar effect?
Am I doing something wrong? How are more advanced systems implemented?

To visualise a bit more what I'd like to achieve, here is some pseudocode:
void renderingThread(sf::Window* window)
{
    std::atomic_bool simulationDone{false};
    while (window->isOpen())
    {
        simulationDone.store(false);

        std::thread simulationThread([&]() {
            // update logic

            // renderQueue is a thread-safe queue
            // queue up GL calls
            renderQueue.push(new CmdClearColor(1, 1, 1, 1));
            renderQueue.push(new CmdClear(COLOR_BUFFER_BIT));

            simulationDone.store(true);
        });

        std::thread renderThread([&]() {
            window->setActive(true);

            while (!(renderQueue.empty() && simulationDone.load())) {
                Cmd* cmd = nullptr;
                if (renderQueue.pop(cmd)) {
                    // do the GL calls
                    cmd->DoCmd();
                }
            }

            window->display();
        });

        simulationThread.join();
        renderThread.join();
    }
}
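
For reference, the renderQueue above is assumed to be a simple mutex-based thread-safe queue, roughly like this sketch (not the exact class I use):

#include <mutex>
#include <queue>

// Minimal sketch of the thread-safe queue assumed by the pseudocode above
template <typename T>
class ThreadSafeQueue
{
public:
    void push(T value)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(std::move(value));
    }

    // Returns false if the queue is currently empty
    bool pop(T& out)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty())
            return false;
        out = std::move(m_queue.front());
        m_queue.pop();
        return true;
    }

    bool empty() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_queue.empty();
    }

private:
    mutable std::mutex m_mutex;
    std::queue<T>      m_queue;
};

// e.g. ThreadSafeQueue<Cmd*> renderQueue;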
 

Laurent

  • Administrator
  • Hero Member
  • Posts: 32498
Re: How to get rid of memory leak with a render thread inside a thread?
« Reply #1 on: August 03, 2019, 08:16:50 am »
You're spawning new threads at every iteration of your main loop, so tens of times per second. This is going to eat all your CPU power just on context switching (both system and OpenGL).

Spawn one thread for each of your jobs, once at init time, and then let them run endlessly, using (any kind of) synchronization to exchange commands/data between them. That doesn't explain the memory leak, but it will certainly solve it.
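
Something along these lines (just a rough sketch, with your context settings and the actual command/data exchange left out, and an atomic flag as one possible way to signal shutdown):

#include <SFML/Window.hpp>
#include <SFML/OpenGL.hpp>
#include <atomic>
#include <thread>

int main()
{
    sf::Window window(sf::VideoMode(640, 480), "SFML window with OpenGL");
    window.setActive(false); // release the context so the render thread can take it

    std::atomic<bool> running(true);

    // One long-lived rendering thread, created once at init time
    std::thread renderThread([&]() {
        window.setActive(true); // activate the context once, in this thread

        while (running.load())
        {
            // ... consume whatever commands/data the logic thread produced ...
            glClearColor(1.f, 1.f, 1.f, 1.f);
            glClear(GL_COLOR_BUFFER_BIT);

            window.display();
        }
    });

    // The main thread only handles events
    while (running.load())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                running.store(false);
        }
    }

    renderThread.join(); // stop rendering before destroying the window
    window.close();
    return 0;
}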

But I really doubt you'll gain anything with this strategy. OpenGL is fundamentally single-threaded, so keeping things simple is often the most efficient choice.
Laurent Gomila - SFML developer

brodajarek3

  • Newbie
  • Posts: 3
Re: How to get rid of memory leak with a render thread inside a thread?
« Reply #2 on: August 03, 2019, 11:46:25 am »
Quote
You're spawning new threads at every iteration of your main loop, so tens of times per second. This is going to eat all your CPU power just on context switching (both system and OpenGL).

Spawn one thread for each of your jobs, once at init time, and then let them run endlessly, using (any kind of) synchronization to exchange commands/data between them. That doesn't explain the memory leak, but it will certainly solve it.

Yes, I know that I shouldn't create new threads every iteration; I'm not a total noob in this topic, I just wanted to write it as fast as possible and go to bed.

Quote
But I really doubt you'll gain anything with this strategy. OpenGL is fundamentally single-threaded, so keeping things simple is often the most efficient choice.

So, is a multithreaded renderer a no-go? :/

I don't have a problem with rendering graphics on a single/main thread; I just wanted to learn how to do it with multiple threads. I already know about synchronization techniques and wanted to do something with that knowledge.

Laurent

  • Administrator
  • Hero Member
  • Posts: 32498
Re: How to get rid of memory leak with a render thread inside a thread?
« Reply #3 on: August 03, 2019, 05:53:07 pm »
Quote
So, it is a no-go with a multithreaded renderer? :/
You can have a rendering thread that runs at a constant 60 FPS, and a logic thread that simulates the world / physics / AI / whatever. This is a common pattern.

What you did, issuing GL commands in the simulation thread and processing them in another one, doesn't really make sense to me. The rendering thread should take a list of entities to draw, not low-level rendering commands.
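
Something like this, where the logic thread publishes a snapshot of drawable state and the render thread only reads it (just a sketch; Entity and drawEntity are placeholders for whatever your scene contains):

#include <mutex>
#include <vector>

struct Entity { float x, y; /* whatever the render thread needs to draw it */ };

// Placeholder: issue the actual GL calls for one entity
void drawEntity(const Entity& entity) { /* ... */ }

std::mutex          sharedMutex;
std::vector<Entity> sharedEntities; // latest snapshot, written by the logic thread

// Logic thread: publish the world state once per update
void publishFrame(const std::vector<Entity>& world)
{
    std::lock_guard<std::mutex> lock(sharedMutex);
    sharedEntities = world;
}

// Render thread: copy the latest snapshot, then draw it without holding the lock
void renderFrame()
{
    std::vector<Entity> frame;
    {
        std::lock_guard<std::mutex> lock(sharedMutex);
        frame = sharedEntities;
    }

    for (const Entity& entity : frame)
        drawEntity(entity);
}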
Laurent Gomila - SFML developer

brodajarek3

  • Newbie
  • Posts: 3
Re: How to get rid of memory leak with a render thread inside a thread?
« Reply #4 on: August 03, 2019, 07:56:29 pm »
Quote
What you did, issuing GL commands in the simulation thread and processing them in another one, doesn't really make sense to me.

I don't have experience with multithreaded rendering, and there aren't many examples on the internet. I recently found http://xdpixel.com/how-a-multi-threaded-renderer-works/ and thought it would be good to implement something similar.

Quote
You can have a rendering thread that runs at a constant 60 FPS, and a logic thread that simulates the world / physics / AI / whatever. This is a common pattern.
Yes, I know this pattern.

Quote
What you did, issuing GL commands in the simulation thread and processing them in another one, doesn't really make sense to me. The rendering thread should take a list of entities to draw, not low-level rendering commands.
Yes, you're right, but IMO it's always better to test things with a simple example first and then move on to something more advanced and higher level.