
Author Topic: SoundStream Latency  (Read 17728 times)


Laurent

  • Administrator
  • Hero Member
  • *****
  • Posts: 32498
    • View Profile
    • SFML's website
    • Email
SoundStream Latency
« Reply #15 on: January 13, 2011, 11:00:42 pm »
Quote
To start playing, a buffer first needs to be filled; that's the whole reason, and also why latency can never be lower than the sound card's minimum buffer size.

OK, but it's very fast to fill; I still don't see why the duration of the buffer equals the latency.

Quote
Using multiple buffers is basically the same as one big buffer with regard to latency: before the third buffer starts playing, the first needs to be filled (42.6 ms), the first plays while the second is filled (42.6 ms), and then the second plays before the third starts (42.6 ms), for a total of roughly 128 ms.

Sorry, I don't get it (probably because I'm exhausted tonight). You fill the first buffer (which takes a few ms), then the music starts playing and never stops, while the other buffers are recycled and filled in the background. So to me, the latency equals the few ms needed to fill the very first buffer, that's all.
Laurent Gomila - SFML developer

Zcool31

  • Newbie
  • *
  • Posts: 13
    • View Profile
SoundStream Latency
« Reply #16 on: January 13, 2011, 11:12:37 pm »
Quote from: "Laurent"
Quote
To start playing, a buffer first needs to be filled; that's the whole reason, and also why latency can never be lower than the sound card's minimum buffer size.

OK, but it's very fast to fill; I still don't see why the duration of the buffer equals the latency.

Quote
Using multiple buffers is basically the same as one big buffer with regard to latency: before the third buffer starts playing, the first needs to be filled (42.6 ms), the first plays while the second is filled (42.6 ms), and then the second plays before the third starts (42.6 ms), for a total of roughly 128 ms.

Sorry, I don't get it (probably because I'm exhausted tonight). You fill the first buffer (which takes a few ms), then the music starts playing and never stops, while the other buffers are recycled and filled in the background. So to me, the latency equals the few ms needed to fill the very first buffer, that's all.


What Laurent said above is completely true. However, from the perspective of a synth, latency is the delay between when a change occurs on the input and when that change is reflected in the output. In my synthesizer, I generate samples continuously, so the delay between the time an input changes and when my newly generated samples reflect this is negligible. However, I'm not able to provide these new samples to OpenAL until it finishes processing one buffer. Then, OpenAL will not get around to playing these samples until it's gone through all the other buffers and come around to the one I just filled.
In effect, even though my synth can respond to input nearly instantaneously, you won't hear changes on the output until OpenAL gets around to playing them.

l0calh05t

  • Full Member
  • ***
  • Posts: 200
    • View Profile
SoundStream Latency
« Reply #17 on: January 13, 2011, 11:13:35 pm »
Ok, I'll try a different example:

Let's say you have a game where the music changes depending on some event E (using two buffers, 1 and 2).
At first, the event has not occurred, so you are playing music A: you fill buffer 1 with it and enqueue it. While buffer 1 is playing, you have to fill buffer 2, as otherwise buffer 1 could end before buffer 2 was enqueued, leading to stuttering.
Now E occurs. But what can you do? Nothing, as buffers 1 and 2 are enqueued. So you wait until buffer 1 becomes free, incurring latency (corresponding to buffer 1's length).
When buffer 1 is free, you fill it with music B and enqueue it, but music A is still playing while buffer 2 is being emptied, incurring yet more latency (corresponding to buffer 2's length).

Could you have dequeued buffer 2 and filled it with music B while buffer 1 was still playing? No, because it is not precisely known when buffer 1 will end, and even the smallest gap would result in stuttering.

Zcool31

SoundStream Latency
« Reply #18 on: January 14, 2011, 12:09:05 am »
l0calh05t, that is exactly right. There is no way around this problem; you can only hope to minimize it by using small buffers.

That being said, I would like to pose a different question. It might be better suited for another thread, but it is for this project so I'll ask it here as well.

I essentially have a producer-consumer problem (I produce samples with my synth that are consumed, in a different thread, as I stream them to the audio output). I know this can be solved simply using semaphores. However, SFML only supports mutexes (if I'm not mistaken).

Can I still solve this problem using only what SFML provides, or should I just go out and get a simple threading library and use its implementation of semaphores?

l0calh05t

SoundStream Latency
« Reply #19 on: January 14, 2011, 12:12:58 am »
Quote from: "Zcool31"
l0calh05t, that is exactly right. There is no way around this problem; you can only hope to minimize it by using small buffers.

That being said, I would like to pose a different question. It might be better suited for another thread, but it is for this project so I'll ask it here as well.

I essentially have a producer-consumer problem (I produce samples with my synth that are consumed, in a different thread, as I stream them to the audio output). I know this can be solved simply using semaphores. However, SFML only supports mutexes (if I'm not mistaken).

Can I still solve this problem using only what SFML provides, or should I just go out and get a simple threading library and use its implementation of semaphores?


Producer-consumer problems can be solved with mutexes alone, but I would rather recommend having a look at boost::thread and using that (it includes far more powerful concepts, such as condition variables).

Zcool31

SoundStream Latency
« Reply #20 on: January 14, 2011, 12:27:00 am »
Thanks for the tip. However, I think I'd be more comfortable using something like pthreads. That said, how would you solve the producer-consumer problem using only mutexes, but without having either thread constantly looping?

I could very easily do this, but I would prefer not to:

Code: [Select]
int samples = 0;
const int limit = 100;
Mutex mut;

void produce(){
    while(true){ // spins even when the queue is full, burning CPU
        mut.lock();
        if(samples<limit){
            //make a sample
            ++samples;
        }
        mut.unlock();
    }
}
void consume(){
    while(true){ // spins even when there is nothing to consume
        mut.lock();
        if(samples>0){
            //consume a sample
            --samples;
        }
        mut.unlock();
    }
}

l0calh05t

SoundStream Latency
« Reply #21 on: January 14, 2011, 12:34:51 am »
Quote from: "Zcool31"
Thanks for the tip. However, I think I'd be more comfortable using something like pthreads. That said, how would you solve the producer-consumer problem using only mutexes, but without having either thread constantly looping?


pthreads is POSIX-only; Boost is cross-platform. Are you sure you want to stick with "something like pthreads"?

Zcool31

SoundStream Latency
« Reply #22 on: January 14, 2011, 12:37:50 am »
Quote from: "l0calh05t"
Quote from: "Zcool31"
Thanks for the tip. However, I think I'd be more comfortable using something like pthreads. That said, how would you solve the producer-consumer problem using only mutexes, but without having either thread constantly looping?


pthreads is POSIX-only; Boost is cross-platform. Are you sure you want to stick with "something like pthreads"?


Pthreads is a set of specifications, and source-compatible implementations exist for most systems. I was looking at pthreads-win32: http://sourceware.org/pthreads-win32/

devlin

  • Full Member
  • ***
  • Posts: 128
    • View Profile
SoundStream Latency
« Reply #23 on: January 14, 2011, 09:17:25 am »
std::thread from C++0x is another option, if your compiler supports it (I'm not sure whether the GCC version on MacOS is recent enough yet).

Laurent

SoundStream Latency
« Reply #24 on: January 14, 2011, 01:31:14 pm »
I understand your examples, thanks.

There's still something that I don't understand though. The idea behind all these examples is that you have a large amount of samples queued, and suddenly want to play new samples immediately, so basically you want to discard the data which is buffered and replace it with the new samples. Am I right?

First, you have valid data queued and ready to be played, so if you want to get rid of it you must tell the audio system to do so (music.Stop(), for example). Secondly, this is not what a sound stream is supposed to do. Streams are continuous flows of data: all the data that you enqueue is supposed to be played. It's like a network connection: what you send is received; you cannot suddenly remove data that is already in the pipe and restart with something new.

So my conclusion is that to do what you describe, I wouldn't use an SFML sound stream at all; this looks like something totally different to me.

Sorry if there's still something that I misunderstood :)


Quote
That being said, I would like to pose a different question. It might be better suited for another thread, but it is for this project so I'll ask it here as well.

Seriously, I don't think it helps to have such unrelated discussions in a single thread ;)
Laurent Gomila - SFML developer

l0calh05t

SoundStream Latency
« Reply #25 on: January 14, 2011, 03:43:05 pm »
Quote from: "Laurent"
There's still something that I don't understand though. The idea behind all these examples is that you have a large amount of samples queued, and suddenly want to play new samples immediately, so basically you want to discard the data which is buffered and replace it with the new samples. Am I right?


Not quite. Discarding data or swapping buffers after the fact is a bad idea, if it works at all. But reducing the buffer length will automatically result in lower latency, without any discarding. There's really nothing you can do on the SFML side, though, as OpenAL simply doesn't support the kind of streaming required (the fact that streaming is only possible via polling is a sign of really, really bad API design if you ask me... I wonder what the designers of OpenAL were thinking).

Quote

First, you have valid data queued and ready to be played, so if you want to get rid of it you must tell the audio system to do so (music.Stop(), for example). Secondly, this is not what a sound stream is supposed to do. Streams are continuous flows of data: all the data that you enqueue is supposed to be played. It's like a network connection: what you send is received; you cannot suddenly remove data that is already in the pipe and restart with something new.


What does a synthesizer, or a continuous stream of dynamically changing music, produce? Not individual sounds which can be played one after the other (or just stopped in the middle), but a continuous stream of samples: individual samples lasting about 20 µs each. Streaming is the only option, just not via OpenAL, since it isn't designed for low-latency streaming.

Quote

So my conclusion is that to do what you describe, I wouldn't use an SFML sound stream at all; this looks like something totally different to me.

Sorry if there's still something that I misunderstood :)


I wouldn't use an SFML sound stream either, but a PortAudio device, which also streams, just with a single, small buffer. Still a stream, though ;)

Zcool31

SoundStream Latency
« Reply #26 on: January 14, 2011, 09:30:49 pm »
Regarding multithreading and semaphores: I did a search after I asked that question and found a post where Laurent says SFML doesn't support semaphores.

Regarding the rest of it: I would never want to discard valid queued music data. What I would like is for data not to have to wait in a queue before being played. Of course that is impossible, so the next best thing is to queue as little data as possible.

The only reason I might want to queue a large number of samples is if, for whatever reason, it is faster to generate samples in large chunks than one at a time. I specifically wrote my synth so that it takes a fixed amount of time to generate each sample (the amount depends on the complexity of the synth, but never changes at runtime).

My next step is building a GUI for creating synths. Each synth consists of components similar to VST plugins, and I need a simple way to connect and configure these components, then write the configuration to a file so that it can later be loaded and used to play music (essentially simplified MIDI).

If I have problems on the SFML side, expect a thread in the Graphics part of the forum.

Now I need to get to writing my own text field input...