Why did you remove a call to glFlush() from Font.cpp and add one to Texture.cpp?
Strictly speaking, Font isn't a GlResource, so it shouldn't make GL calls itself. It owns Textures, so it can rely on them calling glFlush() whenever their texture data is actually modified. Also, one of the update() overloads in Texture was missing a glFlush(), so I added it for consistency.
Threads with no active target no longer create their own context, but share the same one; won't it add too much overhead to constantly synchronize GL calls and activate/deactivate the shared context between these threads?
In a realistic scenario, users either perform rendering on secondary threads (which I always recommend against anyway) or use them to offload some heavy resource loading. In the former case, there is already an active context on that thread, whereas in the latter case, most of the time will be spent "doing the other stuff".
Deactivating a context on a thread does nothing besides flushing it first, and flushing a context is a relatively cheap operation. If we promise the driver that we try hard to keep GL operations synchronized in our own code, it has less work to do itself, even if we constantly activate and deactivate contexts. This also allows users to optimize their own code further once they understand that parallel GL operations don't really benefit them in most cases.
Almost all the GL calls in SFML are asynchronous, meaning they return right away. I tried to minimize the time the shared context is locked (if it even gets used in the first place), so I am fairly confident there won't be much contention even when GL operations are performed on multiple threads that all use the shared context. We can run benchmarks, but I would be surprised if there was any noticeable performance degradation because of this.
Why don't you create a PR?
All in good time.