After a bit of reading and a lot of good will, I've come to the conclusion that the majority of nVidia users report they are better off disabling this "Threaded Optimization" completely. Why it is called an "optimization", nobody knows. Allegedly it is supposed to magically make older single-threaded games run faster on multi-core CPUs, but this comes at the cost of breaking just about everything else, including SFML as you have experienced. I wouldn't mess around with detecting whether an nVidia GPU is present and setting stuff up differently. It just becomes a headache later on when you are testing stuff that relies on this split behavior. As I always like to say: keep it simple, fast and portable.
In case you are really, really keen on finding out whether the user has an nVidia GPU and are willing to break everything there is to break:
#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>
#include <cstring>
#include <iostream>

int main() {
    // Constructing an sf::Context activates an OpenGL context on this
    // thread, which glGetString() needs to return anything meaningful.
    sf::Context context;

    const char* vendor_string = reinterpret_cast<const char*>( glGetString( GL_VENDOR ) );

    // glGetString() returns a null pointer if no context is active,
    // so guard against that before searching the string.
    if( vendor_string && std::strstr( vendor_string, "ATI" ) ) {
        std::cout << "You are using an ATI graphics card!\n";
    }

    return 0;
}
Just change the "ATI" to "NVIDIA" or whatever works for you. Note that nVidia's driver typically reports its GL_VENDOR string as "NVIDIA Corporation", so matching on "NVIDIA" (rather than "nVidia") is your safest bet. I have an AMD card so I couldn't test it myself. As always with these kinds of things, YMMV.
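If you want something a bit more reusable, here is a minimal sketch of a vendor-detection helper. The detect_gpu_vendor() function and its vendor substrings are my own assumptions based on commonly reported driver strings ("NVIDIA Corporation", "ATI Technologies Inc.", various "Intel" strings); verify them on your own hardware before relying on them:

#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>
#include <cstring>
#include <iostream>

enum class GpuVendor { Nvidia, Amd, Intel, Unknown };

// Hypothetical helper: maps the GL_VENDOR string to a vendor enum.
// The substrings below are based on commonly reported driver strings
// and may vary between driver versions and platforms.
GpuVendor detect_gpu_vendor() {
    const char* vendor = reinterpret_cast<const char*>( glGetString( GL_VENDOR ) );
    if( !vendor )
        return GpuVendor::Unknown; // no active OpenGL context
    if( std::strstr( vendor, "NVIDIA" ) )
        return GpuVendor::Nvidia;
    if( std::strstr( vendor, "ATI" ) || std::strstr( vendor, "AMD" ) )
        return GpuVendor::Amd;
    if( std::strstr( vendor, "Intel" ) )
        return GpuVendor::Intel;
    return GpuVendor::Unknown;
}

int main() {
    sf::Context context; // activate a context before querying

    switch( detect_gpu_vendor() ) {
        case GpuVendor::Nvidia: std::cout << "nVidia GPU detected\n"; break;
        case GpuVendor::Amd:    std::cout << "AMD/ATI GPU detected\n"; break;
        case GpuVendor::Intel:  std::cout << "Intel GPU detected\n";  break;
        default:                std::cout << "Unknown GPU vendor\n";  break;
    }

    return 0;
}

You could switch on the returned enum wherever you need vendor-specific behavior, but as I said above, I'd avoid building on this split in the first place.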