Warning: Wall of text, as usual.
Because it makes everything much more complicated for very little gain. SFML was never meant to be one of those multi-API rendering engines, just a wrapper on top of a low-level rendering API that already works well on all the target platforms.
You have to define what you consider "well" here. I think we've made our lives too easy for too long by labelling any use case that doesn't perform "well" as "out of SFML's scope". If something is possible and the user demonstrates that it works in principle, they have every right to do so as long as it doesn't violate behaviour described in the interface documentation.
It is true that there are different classes of applications that can be developed with SFML, some of them fit the library perfectly, and some others less so. People who are familiar with SFML will probably already have a broad idea of what these classes could be, e.g. "Tetris-clone", "Action platformer", "Voice recorder" etc.
When someone comes along who isn't so familiar with the library and does something that doesn't fit the usual pattern, then after a month of effort to get a somewhat working demo together they come to the forum, and all we tell them is "This isn't a typical use case of SFML, you are better off using something else.", most often "Just use raw OpenGL, it's faster for what you are doing".
Is this really the direction we want to go in? Why do we have to explain to people that there are things that you can do optimally with SFML and others that are not optimized at all? This isn't even mentioned in the API documentation. As a library that abstracts many low-level concepts away for the user, shouldn't we strive to provide an optimal solution for everything that the user is allowed to do? Either that, or we tell them early enough that some things are implemented in a suboptimal way and they should avoid using them in their use case. I prefer doing the former, it is more intuitive and makes the library more attractive to a wider audience.
I've seriously never seen any valid point in favor of a DirectX or Vulkan back-end, except that "it would be cool". People are not even supposed to care about the underlying API -- who cares that sfml-audio uses OpenAL?
DirectX.... well.... if you ask me, there is no valid point in favour of it, especially now that Vulkan is here.
Vulkan, on the other hand, is designed to provide a superset of the functionality that OpenGL provides. Simply put, there is nothing that OpenGL can do that Vulkan can't, whereas there are many things that Vulkan can do that OpenGL can't. Vulkan was also designed to be implementable on a wider range of platforms than DirectX or OpenGL ever were (this is why OpenGL ES was necessary). As such, going forward, if Vulkan sees its expected adoption, it could be the only API that would need to be supported to target any and all future platforms (possibly including consoles).
I've followed this thread since it started, and some of the most important points still haven't been mentioned.
1. Vulkan is consistent, even across vendors and/or platforms.
In theory, all conforming implementations should deterministically produce the same framebuffer contents given the same input. If SFML were to support Vulkan on e.g. Windows and Linux, there would be absolutely no perceivable graphical difference when running in fullscreen. As everybody should know by now, this is hardly ever the case when targeting both platforms with the OpenGL backend. With Vulkan, testing on both platforms is still necessary, but the number of adjustments that have to be made would be kept to a minimum.
Just to give you a feeling for how strict Khronos is with their conformance tests, this is the list of products that have passed:
https://www.khronos.org/conformance/adopters/conformant-products
One might notice that there are no AMD products there... yet. This is both a good and a bad sign. Good, because it shows that IHVs can no longer ship bogus implementations before making sure they actually function correctly. Bad, because it's a sign that AMD is having difficulties as usual, especially considering that they have much less hardware that supports Vulkan in the first place (and therefore less to test).
2. Vulkan is predictable.
Remember those times when you loaded your sf::Textures during a well-placed loading screen, or even went so far as to offload texture loading to a separate thread? Remember the stuttering that occurred when you actually started to use those textures in your rendering? That is a result of the freedom that OpenGL implementations get. In fact they are so free, they are allowed to delay operations as long as they want, as long as it doesn't result in any externally visible effects (performance is obviously not covered by this). This means that quite literally, "magic" happens inside the driver. Drivers try to be as intelligent as they can, but whenever they guess wrong, you start to feel it. Even if you really know what you are doing and want to shout "Upload the textures NOW please.", OpenGL just doesn't care, simply because it doesn't have to. Lately there have been more and more extensions that give developers more explicit control over the behaviour of the implementation; SFML however still uses the legacy API, which only makes matters worse. If this trend continues, then one day, once you have the extensions that actually provide 100% control, you will probably end up with something identical to Vulkan.
In Vulkan, there is no more guessing. At least, no guessing that can lead to detrimental side effects as was the case in OpenGL. If you have looked at the API, everything is completely explicit. Any performance impact will more than likely come from your own code instead of a sub-optimal Vulkan implementation. This is what developers want, no matter whether you write AAA games or your 2D side-scrolling action shooter. No matter how simple your game is, players hate noticeable stuttering. This is probably why AAA developers preferred Direct3D over OpenGL on Windows for so long: while Direct3D 11 and earlier exhibited similar problems, it was nowhere near as bad as in OpenGL.
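To make that explicitness a bit more concrete, here is a minimal sketch (not actual SFML code, just my own illustration) of what an explicit texture upload can look like in Vulkan. The device, queue and a pre-recorded command buffer containing the staging-buffer-to-image copy are assumed to be created elsewhere; the point is simply that you decide when the copy is submitted and you can block until it has actually finished, so the hitch can't silently move into your first frame of rendering:

[code=cpp]
#include <vulkan/vulkan.h>
#include <cstdint>

// Submit a pre-recorded upload command buffer and block until the GPU has
// really finished copying the staging data into the texture image.
// All handles are assumed to have been created elsewhere.
void flushTextureUpload(VkDevice device, VkQueue queue, VkCommandBuffer uploadCmd)
{
    VkFenceCreateInfo fenceInfo{};
    fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;

    VkFence fence = VK_NULL_HANDLE;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &uploadCmd;

    // The copy is issued when *we* say so, not whenever the driver feels like it.
    vkQueueSubmit(queue, 1, &submit, fence);

    // Returns only once the upload has completed -- no surprise stutter the
    // first time the texture is sampled during rendering.
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkDestroyFence(device, fence, nullptr);
}
[/code]

You would obviously call something like this during your loading screen or on your loader thread, but the point stands: the wait is where you put it, not where the driver hides it.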
3. Vulkan doesn't force you to keep validation on, even when you really don't want it.
OpenGL, whether in your debug or release configuration, will ALWAYS validate whatever you do. This is a good thing when it prevents BSODs and similar nasty effects, and kept to a minimum it wouldn't be that bad. However, OpenGL overdoes it by a long way. Even if you know for certain that those 10 lines of OpenGL code can never fail, not even theoretically, there is no way to tell OpenGL to trust you, even if you are willing to accept the risk of BSODs or nasty crashes. The only difference between a debug and a release version of SFML in the eyes of an OpenGL debugger is all the glGetError() calls, that's it. There is no "optimized release version" of OpenGL; it quite literally is always in debug mode.
Vulkan, on the other hand, makes validation opt-in. Meaning, unless you really ask for it, you get nothing. If you forget that one call to bind your descriptor set without validation, be prepared for some "interesting" surprises. This is also the method of development that everybody is used to: develop and test in debug, with crappy performance and checks all over the place, then release with all optimizations and no checks at all. Vulkan just follows what everybody would consider common practice.
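As a small illustration of how opt-in this is (a sketch, using the layer name shipped with the LunarG SDK at the time of writing, which may change), the only difference between a checked and an unchecked build is whether you list the validation layer at instance creation:

[code=cpp]
#include <vulkan/vulkan.h>
#include <vector>

// Sketch: request the standard validation layer only in debug builds.
// A release build asks for nothing and therefore pays for nothing.
VkInstance createInstance()
{
    std::vector<const char*> layers;
#ifndef NDEBUG
    layers.push_back("VK_LAYER_LUNARG_standard_validation"); // LunarG SDK layer name
#endif

    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    createInfo.enabledLayerCount = static_cast<uint32_t>(layers.size());
    createInfo.ppEnabledLayerNames = layers.empty() ? nullptr : layers.data();

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);
    return instance;
}
[/code]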
4. Vulkan provides more information about what the hardware/implementation really supports, so alternate code paths can be chosen if they are more suited.
One headache that many advanced OpenGL developers might know of is: how do I determine the best way to do something on hardware XYZ? In OpenGL, there is only the notion of "either an extension is supported, or it isn't". We don't get any information through the API about e.g. whether some brand new feature is actually supported by the hardware or merely emulated in the driver so that IHVs can stick that "Supports OpenGL 4.5" label on their rebranded cards. What you end up doing with OpenGL is empirically testing performance on every single class of hardware you plan on supporting. On one hardware family a certain call might achieve only 50% of the performance it does on another, even from the same IHV. This leads to really absurd differentiation code based on the GL_RENDERER string.
While the same can still be done in Vulkan, based on its advertised extensions, it is harder for implementations to "pretend" to support something that they don't truly support. And because Vulkan is so low level, it doesn't make sense to package code blobs as an extension inside the driver when the application could do the same thing itself, probably even more efficiently.
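For a taste of what "more information" means in practice, here is a sketch of the structured data Vulkan hands you per physical device, instead of a GL_RENDERER string you have to parse and guess from (the specific fields picked out in the comments are just examples):

[code=cpp]
#include <vulkan/vulkan.h>

// Query structured properties, features and memory info for a physical device
// and pick a code path from that, rather than from vendor string matching.
void inspectDevice(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceProperties props{};
    vkGetPhysicalDeviceProperties(gpu, &props);

    VkPhysicalDeviceFeatures features{};
    vkGetPhysicalDeviceFeatures(gpu, &features);

    VkPhysicalDeviceMemoryProperties memory{};
    vkGetPhysicalDeviceMemoryProperties(gpu, &memory);

    // Concrete, queryable facts instead of guesswork, e.g.:
    //   props.limits.maxImageDimension2D  -- largest supported 2D texture size
    //   features.samplerAnisotropy        -- anisotropic filtering available?
    //   memory.memoryHeaps[0].size        -- size of the first memory heap
    (void)props; (void)features; (void)memory;
}
[/code]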
5. Vulkan actually put some thought into its WSI (Window System Integration).
Unlike the horrible mess that has accumulated over the last 20 years in OpenGL, Vulkan had the opportunity to provide a "simple" (relatively) and, more importantly, consistent API to connect to the diverse window systems on the platforms it supports. There is no WGL or GLX or EGL or EAGL or CGL or NSOpenGL nonsense to deal with; the implementation takes care of it for us and covers everything you would expect to have to do with a single unified API. All you have to take care of is getting a window and passing its platform-specific handle over to Vulkan; the rest (pixel format, double buffering, multisampling) is uniform across all platforms. If you look at how SFML currently creates contexts, from a high level it's always the same process, just with completely different APIs for each platform.
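To illustrate just how uniform the WSI side is (again only a sketch with handles assumed to exist, and with format/present-mode choices hard-coded for brevity): the only platform-specific step is creating the VkSurfaceKHR from the native window handle; after that, querying capabilities and creating the swapchain is the same code everywhere:

[code=cpp]
#include <vulkan/vulkan.h>

// Once a VkSurfaceKHR exists (created via vkCreateWin32SurfaceKHR,
// vkCreateXlibSurfaceKHR, etc.), everything below is identical on all
// platforms: "pixel format", "double buffering" and presentation are all
// handled by the one swapchain API.
VkSwapchainKHR createSwapchain(VkPhysicalDevice gpu, VkDevice device,
                               VkSurfaceKHR surface, uint32_t width, uint32_t height)
{
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(gpu, surface, &caps);

    VkSwapchainCreateInfoKHR info{};
    info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    info.surface = surface;
    info.minImageCount = caps.minImageCount;              // "double buffering"
    info.imageFormat = VK_FORMAT_B8G8R8A8_UNORM;          // "pixel format"
    info.imageColorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
    info.imageExtent = {width, height};
    info.imageArrayLayers = 1;
    info.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
    info.preTransform = caps.currentTransform;
    info.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    info.presentMode = VK_PRESENT_MODE_FIFO_KHR;          // vsync, always available
    info.clipped = VK_TRUE;

    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
    vkCreateSwapchainKHR(device, &info, nullptr, &swapchain);
    return swapchain;
}
[/code]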
TL;DR:
Vulkan basically takes everything OpenGL did wrong and finally does it right. This led to a more explicit, lower-level API, but that seems to be what veterans had been crying out for for a long time. Vulkan is what you get when hardware vendors and software developers sit at the same table to talk about the API they use to communicate with each other. One might think: that makes sense, wasn't it always like that? The answer is: until now, not really... Everybody was part of Khronos, but they never really sat at a table all at once, probably for political reasons and because there was no drive for something new, until Mantle and DirectX 12 showed up.
SFML doesn't suffer from the things OpenGL did wrong too often, but it does suffer every now and then. If SFML had a proper, real, supported Vulkan backend while maintaining its current API, there would be nothing left for people to complain about (as long as they target Vulkan-capable hardware, that is).
Multiple backends for the platform, on the other hand, are mandatory for SFML to run on these platforms. So this is not really comparable.
Of course it's not comparable, because a single Vulkan implementation would make these multiple, divergent backends redundant even across multiple platforms. As soon as Vulkan-class hardware is broadly available (i.e. 2-3 years from now), there will only be benefits in supporting Vulkan as the "main rendering backend", with OpenGL relegated to "legacy rendering backend". Fixes would target everything at once, and as I mentioned in my wall of text, the user experience would be consistent across all platforms as well.
In my opinion, SFML really only needs to support 2 APIs: OpenGL and Vulkan. OpenGL, as mentioned, for legacy hardware, and Vulkan for hardware that supports it (basically everything from 2 years ago onward). Both target many more platforms than e.g. DirectX does, so a DirectX renderer would make almost no sense if OpenGL and Vulkan renderers were available.
I want to get to the point where I can tell people in good conscience: "You want your SFML game to run faster? Get a better GPU." Right now you will barely see any improvement over a decent 3-year-old GPU even if you upgraded to a GTX Titan X, because OpenGL is single-threaded and SFML renders using the legacy API, which means: a CPU bottleneck from rendering.
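For reference, this is roughly the kind of legacy fixed-function pattern I'm talking about (a simplified sketch, not SFML's actual source): every drawable goes through a handful of client-state setup calls plus a draw call, all of which is CPU-side work in the driver, so a faster GPU doesn't buy you much once you're bound by it:

[code=cpp]
#include <SFML/OpenGL.hpp>

// Simplified sketch of a legacy fixed-function draw: client-side vertex
// arrays, state setup and one draw call per batch. All of these are
// CPU-side driver entry points.
void drawQuadLegacy(const float* positions, const float* texCoords)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, positions);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]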