
Author Topic: Allow changing processing interval in SoundStream  (Read 10129 times)


unlight

  • Newbie
  • Posts: 33
Allow changing processing interval in SoundStream
« on: May 27, 2020, 01:51:41 am »
Hi guys,

https://github.com/SFML/SFML/pull/1517

I can see this pull request is currently closed, but it is also on the discussion board for SFML 2.6 (and it's a very worthwhile feature). Let's discuss.

Is this a feature we want for 2.6 given the recently reduced scope, and what exactly is the current state of the PR (testing or review)?
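For anyone skimming, the feature boils down to letting a derived stream tell the streaming thread how often to poll onGetData() instead of the hard-coded 10 ms. A minimal sketch of what usage could look like, assuming the PR ends up exposing a protected setter called setProcessingInterval (the class name and the setter's name/signature here are my guesses from the PR title, not a confirmed API):

Code:
#include <SFML/Audio.hpp>
#include <vector>

// Sketch of a custom stream that lowers the processing interval.
// setProcessingInterval() is assumed from the PR title; the exact name
// and signature may differ from what actually gets merged.
class LowLatencyStream : public sf::SoundStream
{
public:
    LowLatencyStream(unsigned int channelCount, unsigned int sampleRate)
    {
        initialize(channelCount, sampleRate);

        // Ask the streaming thread to wake up every 1 ms instead of the
        // default 10 ms, so onGetData() is polled more often.
        setProcessingInterval(sf::milliseconds(1));
    }

private:
    bool onGetData(Chunk& data) override
    {
        // Hand back whatever samples are ready; a silent buffer here
        // just keeps the example self-contained.
        m_buffer.assign(getSampleRate() / 100 * getChannelCount(), 0); // ~10 ms of silence
        data.samples     = m_buffer.data();
        data.sampleCount = m_buffer.size();
        return true; // keep streaming
    }

    void onSeek(sf::Time) override {}

    std::vector<sf::Int16> m_buffer;
};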

eXpl0it3r

  • SFML Team
  • Hero Member
  • Posts: 11030
Re: Allow changing processing interval in SoundStream
« Reply #1 on: May 27, 2020, 12:46:33 pm »
I kept it on the board because it would otherwise probably be lost in the abyss.
I think it makes sense to implement this feature. It doesn't have to be for SFML 2.6, but if someone picks it up and makes a mergeable PR before we get scancodes done, I'd be more than happy to merge it.
Official FAQ: https://www.sfml-dev.org/faq.php
Official Discord Server: https://discord.gg/nr4X7Fh
——————————————————————
Dev Blog: https://duerrenberger.dev/blog/

unlight

  • Newbie
  • Posts: 33
Re: Allow changing processing interval in SoundStream
« Reply #2 on: June 04, 2020, 08:02:33 am »
Hi guys,

Do you have any thoughts regarding my comment on the Audio Processing Interval PR?

https://github.com/SFML/SFML/pull/1666

Quote
I have updated the test, which audibly demonstrates the change in processing interval. Interestingly, changing the processing interval to zero does not produce the outcome I would have expected in terms of reducing audio latency. At the standard 10 ms interval, you want about 20 to 30 ms of audio per chunk to prevent the audio device from starving, but reducing the interval to zero does not reduce the amount of audio that you need per chunk. I suppose this would also require a change to the internal audio buffer sizes.

This is not a bug, but maybe an indication that a tunable processing interval alone is not enough to achieve the desired outcome?
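To put some numbers on the "amount of audio per chunk" part, here is the back-of-the-envelope arithmetic I'm using. This is purely illustrative and not taken from SFML's internals; the helper name is made up for the example.

Code:
#include <cstddef>
#include <iostream>

// How many samples a chunk must contain to cover a given duration.
// Purely illustrative arithmetic; not part of SFML's API or internals.
std::size_t samplesPerChunk(unsigned int sampleRate, unsigned int channelCount,
                            unsigned int chunkMs)
{
    return static_cast<std::size_t>(sampleRate) * channelCount * chunkMs / 1000;
}

int main()
{
    // 30 ms of stereo audio at 44.1 kHz -> 2646 samples per onGetData() call.
    std::cout << samplesPerChunk(44100, 2, 30) << " samples\n";
}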