

Messages - LagEvent

1
Graphics / Re: Random spikes in time taken to draw objects
« on: March 20, 2013, 09:29:10 pm »
Caching of shader uniform locations and a proper FBO implementation could improve performance in extreme situations such as mine. I'm also looking into MRT and better VBO performance, features which would probably have made SFML more cumbersome to use for the majority of users.
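To illustrate what I mean by caching uniform locations, here's a minimal sketch. It assumes a raw GLuint program handle rather than sf::Shader, and UniformCache is just a made-up helper name:

Code:
#include <GL/glew.h>
#include <map>
#include <string>

// Cache glGetUniformLocation results so the driver round-trip happens
// once per uniform name instead of once per set-parameter call.
class UniformCache
{
public:
    explicit UniformCache(GLuint program) : m_program(program) {}

    GLint location(const std::string& name)
    {
        std::map<std::string, GLint>::const_iterator it = m_locations.find(name);
        if (it != m_locations.end())
            return it->second; // cached: a plain memory read, no glGet
        GLint loc = glGetUniformLocation(m_program, name.c_str());
        m_locations[name] = loc; // remember it, even if -1 (unused uniform)
        return loc;
    }

private:
    GLuint m_program;
    std::map<std::string, GLint> m_locations;
};

// Usage: glUniform1f(cache.location("time"), elapsedSeconds);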

For the most part I just like to have control over everything. For instance, I constantly have to swap between your implementation of Vector3 and my own. I've also written a few wrappers around your classes because some private members aren't exposed, which just feels silly. I'm unable to resist the evils of premature optimization.

2
Graphics / Re: Random spikes in time taken to draw objects
« on: March 20, 2013, 08:15:51 pm »
Since my last post I've switched over to pure OpenGL. I also found this post. It suggests that when vsync is disabled, the CPU overloads the GPU with commands, which then has to catch up every second. I tested this, and I'm fairly sure it's false. One poster also mentioned that Process Explorer was affecting the game. I remember back when I was working with XNA, the refresh rate of Task Manager was influencing my game. I started killing background applications, and once I killed f.lux, my eyes hurt, but the largest lag spikes went away completely. In the end, the lag spikes were caused by external software, not SFML or the graphics drivers. I'm still leaving SFML, though, due to various other issues and limitations.

3
Graphics / Re: Random spikes in time taken to draw objects
« on: March 18, 2013, 05:24:33 pm »
How is that convenient?

4
Graphics / Re: Random spikes in time taken to draw objects
« on: March 18, 2013, 07:58:49 am »
Quote
Prior to this extension, it was the window-system which defined and managed this collection of images, traditionally by grouping them into a "drawable".  The window-system API's would also provide a function (i.e., wglMakeCurrent, glXMakeCurrent, aglSetDrawable, etc.) to bind a drawable with a GL context (as is done in the WGL_ARB_pbuffer extension).  In this extension however, this functionality is subsumed by the GL and the GL provides the function BindFramebufferEXT to bind a framebuffer object to the current context.
Source: https://www.opengl.org/registry/specs/EXT/framebuffer_object.txt

While the old pbuffers did indeed require an additional GL context for each render target, the new FBOs do not. This was something I suspected, but it was first confirmed to me by this post while looking into multiple render targets for my deferred rendering engine.

I'm thus led to believe that SFML's implementation of FBOs is somewhat flawed. As this excellent tutorial shows, in addition to creating and binding the FBO at creation time, you need to bind the framebuffer again when drawing. Instead, SFML activates another context. I wrote a small test application to test my hypothesis and made some small modifications to SFML.

#include <SFML/Graphics.hpp>
#include <cstdlib>

int main(int argc, char* argv[])
{
    sf::Vector2i s(1280, 720);
    sf::RenderWindow window(sf::VideoMode(s.x, s.y), "test");

    // Two batches of 10 random triangles: magenta goes through the
    // render texture, green is drawn directly to the window
    sf::VertexArray va1; va1.setPrimitiveType(sf::Triangles);
    sf::VertexArray va2; va2.setPrimitiveType(sf::Triangles);
    for (int i = 0; i < 10*3; ++i) va1.append(sf::Vertex(sf::Vector2f(rand()%s.x, rand()%s.y), sf::Color(255,0,255)));
    for (int i = 0; i < 10*3; ++i) va2.append(sf::Vertex(sf::Vector2f(rand()%s.x, rand()%s.y), sf::Color(0,255,0)));

    sf::RenderTexture target;
    target.create(s.x, s.y);

    while (true)
    {
        sf::Event e;
        while (window.pollEvent(e));

        // Crude ~60 FPS cap: busy-wait for 16 ms
        sf::Clock c;
        while (c.getElapsedTime().asMilliseconds() < 16);

        // Move one random vertex in each batch every frame
        va1[rand()%30].position = sf::Vector2f(rand()%s.x, rand()%s.y);
        va2[rand()%30].position = sf::Vector2f(rand()%s.x, rand()%s.y);

        target.clear(sf::Color(128,128,128));
        target.draw(va1);
        target.display();

        window.clear(sf::Color(255,0,0));
        window.draw(sf::Sprite(target.getTexture()));
        window.draw(va2);
        window.display();
    }
    return 0;
}
 

It does what you'd expect: it draws a lot of silly triangles in crazy colors. I then tried to remove the context from the FBO implementation.

@Graphics\RenderTextureImplFBO.cpp
 67    //delete m_context;
 87    //m_context = new Context;
134    return true; //return m_context->setActive(active);
 

But that just broke everything and left the application rendering a black screen. I then tried to fill in the rest of the missing code.

@Graphics\RenderTextureImplFBO.cpp
134    glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer); return true;
@Graphics\RenderWindow.cpp
 67    glBindFramebuffer(GL_FRAMEBUFFER, 0); return setActive(active);
@Graphics\RenderTarget.cpp
 58    resetGLStates();
 

And magically the program works again as first intended. This seems to drastically stabilize the FPS and reduce the lag spikes, although they are still present. It's ugly, but it seems to work. I still can't fully identify the root cause of the issue, but I'll probably look into it again at a later time.
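For clarity, the per-frame pattern I'm aiming for looks roughly like this in raw OpenGL. This is just a sketch: drawScene() and drawTexturedQuad() are placeholders for whatever actually renders, and the FBO is assumed to have been created in the window's own context:

Code:
#include <GL/glew.h>

void drawScene();        // placeholder: renders the offscreen content
void drawTexturedQuad(); // placeholder: draws the result to the window

// Render to texture, then to the window, using plain framebuffer binds
// instead of a context switch. All parameters are placeholders.
void renderFrame(GLuint fbo, int texW, int texH, int winW, int winH)
{
    // Draw into the texture: binding the FBO is all it takes
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, texW, texH);
    glClear(GL_COLOR_BUFFER_BIT);
    drawScene();

    // Back to the window: framebuffer 0 is the default framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, winW, winH);
    glClear(GL_COLOR_BUFFER_BIT);
    drawTexturedQuad(); // samples the texture attached to the FBO
}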

5
Graphics / Re: Random spikes in time taken to draw objects
« on: March 16, 2013, 10:35:56 pm »
Quote
In fullscreen mode you actually get the full capabilities of the GPU.

Well, no. SFML seems to run in some sort of windowed fullscreen mode, since I'm able to pull up windows in front of my game. In addition, I have multiple monitors active with Windows Aero, HD video streams and even other games running; SFML does not take exclusive control over the video card. I fully understand that there can be a performance increase when taking exclusive control of the video card, and I can even accept the idea that you get better performance in windowed fullscreen. However, I'm not talking about a modest increase in FPS; it's some sort of regular resource stall every second. Hence I'm very interested to see if/how these problems would manifest themselves on Linux.

6
Graphics / Re: Random spikes in time taken to draw objects
« on: March 16, 2013, 08:35:17 pm »
I've been able to confirm that the lag spikes are equally present on AMD hardware. Since it's clearly not hardware dependent, I was about to test out DirectX once again. As I've already said, I had similar but more irregular issues back when I was programming in XNA. Since then I've moved to larger monitors, which has resulted in more windowed gaming, including game development.

I took a gamble and passed in sf::Style::Fullscreen, and to my honest surprise, the lag spikes were reduced to a fraction of what they once were. In some cases they even disappeared completely. I tried windowed mode again, and the spikes came right back, even with the "disable desktop composition" option. I was aware you could theoretically get better performance in fullscreen, but this issue just keeps getting weirder.
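For anyone who wants to reproduce this, the only change was the style flag when creating the window, roughly like so:

Code:
// Windowed: lag spikes present
// sf::RenderWindow window(sf::VideoMode(1280, 720), "test");

// Fullscreen: spikes reduced to a fraction, sometimes gone
sf::RenderWindow window(sf::VideoMode::getDesktopMode(), "test", sf::Style::Fullscreen);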

I'm considering testing it on Linux, but none of my servers have X installed.

7
Graphics / Re: Random spikes in time taken to draw objects
« on: March 12, 2013, 06:09:11 am »
Hi, I'm TheEvent from this post. I'm a really impatient person, so when I felt that no one was listening I simply rage quit. Ironically, that left Laurent looking somewhat insane, repeatedly talking to himself; I apologize for that.

Anyway, I have a long history with this exact issue. I'll try to write up a summary of my experiences, but it won't be short.

I first noticed this way back when XNA 2.0 had just been released. My very crude game didn't flow smoothly. I graphed the FPS, which confirmed my observations, but at the time I eventually blamed Microsoft and moved on to open source. I then stepped up my game by repeatedly shooting myself in the foot with C++ and Ogre3D. Again I noticed some irregular FPS. I did some troubleshooting, but because of my custom game loop and the DirectX/OpenGL cross-platform layer, debugging was hard for me.

At one point I got tired of 3D because of the insane time sink of content creation. I started researching pretty 2D games with deferred rendering. I did some prototyping in XNA 4.0, with poor performance, but continued refining the techniques with SFML. What can I say, I'm stupid enough to obsess over premature optimization.

So, the problem actually consists of several more complicated issues. The sudden decrease in draw time after 1400 frames is probably due to nVidia, but it can be further provoked through SFML.
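For context, graphs like the one below come from timing the draw calls every frame, roughly like this, with the output piped to a file for plotting (a minimal sketch, not my actual harness):

Code:
#include <SFML/Graphics.hpp>
#include <cstdio>

int main()
{
    sf::RenderWindow window(sf::VideoMode(1280, 720), "timing");
    sf::CircleShape shape(100.f);
    sf::Clock drawClock;

    for (int frame = 0; window.isOpen(); ++frame)
    {
        sf::Event e;
        while (window.pollEvent(e))
            if (e.type == sf::Event::Closed)
                window.close();

        // Time only the draw + display portion of the frame
        drawClock.restart();
        window.clear();
        window.draw(shape);
        window.display();
        std::printf("%d %lld\n", frame,
                    (long long)drawClock.getElapsedTime().asMicroseconds());
    }
    return 0;
}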

[Image: graph of per-frame draw time, showing the sudden decrease]

More specifically, nVidia has a well-hidden and sparsely documented feature called threaded optimization. On computers with multiple CPU cores, which is most gaming computers today, the nVidia drivers can offload some processing to another core. You would think that disabling this feature, or even disabling hyper-threading, would fix the problem, but that actually just makes matters worse. The feature does significantly increase performance when enabled, but it does so in a staged manner. As the image above shows, the draw time drops significantly at one point, but on rare occasions it may actually increase again.


There are two interesting observations about these two or three stages. The first is CPU usage. On a dual-core CPU, the load will usually immediately fall from 100% to 50%, freeing up almost an entire core even though the FPS increases. However, this stage is not the same as disabling the threaded optimization feature, which brings me to this picture.

[Image: Intel VTune capture of cross-thread communication]

This is a cutout from Intel VTune showing communication between threads, where the first one represents my game and the second one is the nVidia OpenGL driver, nvoglv32.dll. I use a lot of shaders and FBOs in my game, and upon further investigation, the OpenGL calls that begin with "glGet" actually request information back from the GPU. This messes up the timing, because GPUs are usually designed and optimized for one-way communication. It causes the SFML library to busy-wait when calling methods such as sf::Shader::setParameter and sf::Texture::update. I've modified my local copy of SFML to work around this problem, as shown in this github pull request. There is a similar open issue here.
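The idea behind the workaround (not the actual patch, just a sketch of the principle) is to pay the glGet round-trip once and answer from a CPU-side cache afterwards:

Code:
#include <GL/glew.h>

// Query the value once, then answer from a CPU-side copy, so hot paths
// never trigger the game<->driver ping-pong shown in the VTune capture.
static GLint maxTextureCoords()
{
    static GLint cached = 0;
    static bool initialized = false;
    if (!initialized)
    {
        glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &cached); // one round-trip
        initialized = true;
    }
    return cached; // plain memory read from here on
}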

So SFML is partly to blame for the issues, but I also have two good reasons not to blame SFML. I've tried using OpenGL directly and exclusively, where some, though not all, of the issues remain. In addition, while I don't have any images from ATI hardware, the issue is still present there. The ATI equivalent, atioglxx.dll, shows behaviour similar to nVidia's, although nowhere near as bad. To be completely safe, I have this little trick somewhere in my copy of the SFML source code, which forces the event to trigger 99% of the time when I start my game.
Code:
GLint maxUnits;
// Hammer the driver with glGet round-trips to force the stall to appear
for (int i = 0; i < 10000; ++i)
    glCheck(glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &maxUnits));

At this point I was satisfied with the performance, until I started to get really bothered by two related issues. The second issue is that once the workload of the GPU surpasses that of the CPU, the benefits of nVidia's threaded optimization suddenly disappear and the FPS tanks hard. There isn't really a lot I can do about this, but it causes some irregular behaviour when you're hovering around the point where the GPU and CPU workloads are even.

This brings me to the last issue that I'm aware of: the random but regular spikes. I noted this issue back when I wrote my earlier post, as shown in this image.

[Image: graph showing the regular lag spikes]

At that point I hadn't paid too much attention to it, but now it's really becoming an issue for me. I'm currently playing around with a lot of FBOs. While my FPS is a somewhat stable 300+, I get these spikes of 10-40 milliseconds. That's around 30 FPS, which is really noticeable. Today I traced this back to its source, which is why Google found this thread for me. It has to do with the GL context. As you may know, each FBO has its own sf::Context. Upon calling sf::RenderTarget::clear or draw, you'll eventually get down to WglContext::makeCurrent and wglMakeCurrent, which is the root cause of the huge lag spikes. There could be other causes as well, but I've yet to work around this issue. If there is any way to implement an FBO without using additional contexts, I'd like to know.
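In case anyone wants to experiment, creating an FBO inside the window's existing context is straightforward in raw OpenGL; a rough sketch, with error checking omitted (and obviously not how SFML currently does it):

Code:
#include <GL/glew.h>

// Build a render-to-texture FBO in whatever context is already current,
// so using it later is a glBindFramebuffer call, not a context switch.
GLuint createRenderTexture(int width, int height, GLuint* outTexture)
{
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the window's framebuffer

    *outTexture = texture;
    return fbo;
}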
