There's still something that I don't understand though. The idea behind all these examples is that you have a large number of samples queued, and suddenly want to play new samples immediately, so basically you want to discard the buffered data and replace it with the new samples. Am I right?
Not quite. Discarding queued data or swapping buffers after the fact is a bad idea, if it works at all. But reducing the buffer length automatically results in lower latency, without discarding anything. There's really nothing you can do SFML-wise, though, as OpenAL simply doesn't support the kind of streaming required (the fact that streaming is only possible via polling is a sign of really, really bad API design if you ask me... I wonder what the designers of OpenAL were thinking)
First, you have valid data queued and ready to be played, so if you want to get rid of it you must tell the audio system to do so (music.Stop(), for example). Second, this is not what a sound stream is supposed to do. Streams are continuous flows of data: everything you enqueue is supposed to be played. It's like a network connection: what you send is received; you cannot suddenly remove data that is already in the pipe and restart with something new.
What does a synthesizer or a continuously changing stream of music produce? Not individual sounds that can be played one after the other (or stopped in the middle), but a continuous stream of samples, each one lasting roughly 20 µs. Streaming is the only option. Not via OpenAL, though, since it isn't designed for low-latency streaming.
So my conclusion is that to do what you describe, I wouldn't use a SFML sound stream at all, this looks totally different to me.
Sorry if there's still something that I misunderstood.
I wouldn't use an SFML sound stream either, but a portaudio device, which also streams, just with a single small buffer. Still a stream, though.