"maybe even throw a bit of sf::JustinBieber in the hell level"
Hehe.
The discussion about sf::Sound/sf::Music is quite interesting. I'd like to propose merging sf::Sound and sf::Music into one class, e.g. sf::Audio. GraphicsWale mentioned the LÖVE2D example, and I think they are going in the right direction.
Take a look at the APIs of sf::Sound and sf::Music, and you'll see that they are very similar. More importantly, they do the same thing: loading and playing audio samples. Whether those samples are streamed or buffered is at most an implementation detail and a class property.
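For comparison, this is roughly how the two classes are used today (SFML 2 style, from memory, so details may be slightly off):

#include <SFML/Audio.hpp>

// buffered playback: the whole file is decoded into memory up front
sf::SoundBuffer buffer;
buffer.loadFromFile("effect.ogg");
sf::Sound sound;
sound.setBuffer(buffer);
sound.play();

// streamed playback: the file is decoded on the fly while playing
sf::Music music;
music.openFromFile("music.ogg");
music.play();

Apart from the loading step (setBuffer/loadFromFile vs. openFromFile), the playback interface (play, pause, stop, setVolume, setPitch, setLoop) is practically identical.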
However, thinking more about it, and also in relation to the sf::Window/sf::Graphics idea (which I really like, because it would probably also mean that context creation becomes explicit rather than implicit, unlike now), sf::Audio could be incredibly easy to understand and use:
sf::SoundStream music_source("music.ogg");   // streamed source: decoded on the fly
sf::SoundBuffer sound_source("effect.ogg");  // buffered source: fully decoded in memory
sf::Audio background_music(music_source);
sf::Audio effect(sound_source);
For sf::Sound/sf::SoundBuffer, it already works exactly like in the code above. For sf::Music, it could work similarly. This would clearly separate responsibilities and thus improve modularity: sf::Audio shouldn't be bothered with the source; it's just the class that takes samples from an arbitrary input and processes them for playback.
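Just to sketch the idea (everything below is hypothetical, none of these names or signatures exist in SFML), the split could look roughly like this:

#include <cstdint>
#include <vector>

// hypothetical common interface: anything that can deliver audio samples
class AudioSource
{
public:
    virtual ~AudioSource() {}

    // fill 'samples' with the next chunk; return false when the source is exhausted
    virtual bool getNextChunk(std::vector<std::int16_t>& samples) = 0;

    virtual unsigned int getChannelCount() const = 0;
    virtual unsigned int getSampleRate() const = 0;
};

// hypothetical playback class: pulls chunks from whatever source it was given
class Audio
{
public:
    explicit Audio(AudioSource& source) : m_source(&source) {}

    void play()  { /* feed chunks from m_source to the audio backend */ }
    void pause() { /* backend-specific */ }
    void stop()  { /* backend-specific */ }

private:
    AudioSource* m_source;
};

The buffered and the streamed class would then just be two implementations of that source interface, and sf::Audio never needs to know which one it is talking to.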
We'd be left with one class that plays any audio, and two classes for possible sound sources. Since both are sources, names other than "SoundBuffer" and "SoundStream" (e.g. AudioBuffer, AudioStream, StreamedSource, BufferedSource, BufferedAudio, StreamedAudio) might be chosen.