After analysing the uncompressed video frame by frame, I have come to the conclusion that frames
aren't being dropped. A frame simply misses its synchronization point and is delayed until it gets swapped right before the next one, creating the appearance that a frame has been skipped. It is also very uncommon for GPUs to simply drop frames unless they are under extreme load.
It might sound strange at first, but this might be indirectly caused by your Nvidia GPU simply being
too fast compared to the other cards. You see... with vertical synchronization enabled and in fullscreen, you are asking your GPU to do its best to make sure a new frame can always be presented at the vertical synchronization signal. If your application is producing too many frames, the driver throttles it by blocking the buffer swap; if it is producing too few... it often just doesn't swap at all, leaving the previous contents in the front buffer.
Now consider: what I just said about having too many or too few frames works out nicely if you produce roughly a multiple or fraction of the vsync rate. What if you produce something in between that doesn't fit so nicely? Where does the driver draw the line and say "sorry frame... you are a bit too late" or "sorry frame... you will have to wait for the next refresh"? This is very driver- and hardware-dependent, but it might also depend a bit on your code.
Stuttering is often naively attributed solely to the GPU or the OS because of vsync and the frame limiter. What people seem to forget is that
vsync and the frame limiter do not guarantee a constant framerate. They are just hints, basically "nice to haves", that the responsible system does its best to fulfil. If you write code that depends on having a constant framerate, sorry to be the bearer of bad news, but it simply won't work. You haven't posted any code whatsoever, so based on what I've seen on this forum recently and in the past, I cannot exclude this possibility. If you have thought of this already, then you can ignore what I just said, but showing a bit of code would still be helpful.
I am using a SFML 2.1 stable release which I would have downloaded probably >6months ago.
Not only was it downloaded > 6 months ago, it is almost 1.5 years old, and many fixes behind the current master.
Given the different behaviour on different cards, maybe it is an NVidia OpenGL driver problem.
Perhaps... but drivers have gotten better and better over the years, and I think such a prominent "issue" would have been fixed long ago, since it would affect any OpenGL application. Nvidia is also known for their efforts in supporting the latest OpenGL specifications, so I can't really say they don't put enough effort into OpenGL support.
Has anyone else experienced this kind of issue with SFML?
Like I said above, if the code makes that wrong assumption, then yes, people have experienced this issue, even on AMD and Intel cards. If you want people to be able to test on their own systems, you will have to provide code, or at least a binary.
Is NVidia known to have crappy OpenGL drivers?
Crappy is always relative. Compared to AMD, probably not. From what I've heard, they fix glaring OpenGL issues faster than AMD does. Since drivers are also tied closely to the operating system, we can't rule that out as the cause. It might very well be that the proprietary Nvidia drivers run your application better under Linux than under Windows.
Have there been any recent changes to SFML that might address this issue?
Like I said above, the SFML you are using is more than 1.5 years old. I think it is safe to say that there have been a
large number of fixes for various issues. How many of them might have an effect on this issue? I cannot say...
It appears to be an OpenGL-specific problem. Is it possible it is because SFML uses old/legacy OpenGL?
How and why did you come to this conclusion? If your reasoning is that some other graphics library does not display the same behaviour, then that is merely a
necessary condition for the problem being OpenGL-specific, not a sufficient one. Take into account that the code for the two tests cannot be identical either, and you have too many factors in play to draw a reasonable conclusion.
It is true that SFML uses legacy OpenGL code, but I don't know where you got the misinformed idea that this matters: it has no
impact on whether stuttering occurs or not. Legacy OpenGL was deprecated because it was a bottleneck, not because there were more serious problems that made Khronos run away from it. In fact, vendors still support it as much as non-legacy OpenGL, for the simple reason that industrial CAD applications might still depend on it to function.
Any ideas on how to fix it?
Knowing what the actual problem is comes before trying to find a fix for it...
My biggest concern is that this problem will affect all, or even a significant portion of NVidia users.
You haven't even ruled out that it also happens on certain AMD or even Intel cards. Again... necessary condition.
But *if* the problem is OpenGL specific, and there is no work around, I may have to consider migrating to DirectX-based graphics middleware. Unless maybe SFML plans to support a DirectX back-end.
I really do not understand what it is with people who flock to DirectX as soon as something doesn't work as expected with non-DirectX software. Do you ever hear people say "DirectX doesn't do this right... I'm going to use OpenGL now."? Not even close to as often as the opposite. This can probably be considered a lesser manifestation of
shotgun debugging, and Microsoft is obviously happy that many misinformed people still think like that. Before transitioning to another library, or even to different code using the same library,
understand what the problem is and why the change might make a difference. Simply saying "I don't know how that fixed it, but all I care about is that it
seems to be fixed." isn't good software development practice, and is often the mark of badly trained or lazy developers.
I can't speak for every game developer out there who currently uses DirectX, but if any of them truly believe the marketing material Microsoft has put out over the years about DirectX, which has been debunked time and again by objective observers, then I can only hope they will one day wake up and realize that OpenGL is just as worthy a graphics library as Direct3D. Whoever still believes that Direct3D 12 is "more powerful" than OpenGL 4.5 doesn't know enough about OpenGL 4.5, if you ask me.
They are equally powerful and can produce the exact same results, since operations in Direct3D and OpenGL get translated into the exact same instructions on the GPU anyway. Contrary to what naive laypeople might think, there are no "special Direct3D 12 hardware units" on the GPU that only Direct3D has access to. And in recent years, GPGPUs have become so universal that there isn't really much else one can add in terms of "specialized functionality". What you could only do in a software renderer decades ago, you can do with dedicated hardware now.
Every new iteration of Direct3D and OpenGL is thus focusing primarily on performance improvements. But of course, Microsoft likes to show their customers pretty pictures of what they claim is only possible using the latest API on the exact same hardware.
Please, don't be one of these people. I still have hope that you can rise above that.
And before more people ask: no, Direct3D support in SFML will not be coming before SFML 3. Even if we decide that it is doable, I am certainly not going to invest my unpaid time promoting a graphics library that has done nothing but mislead and stab people in the back time and again. When it comes to open source development, I am strongly driven by my morals, and Microsoft has violated them in too many ways already.