I'm highly impressed with SFML, but I have some questions about its audio module:
How would a multi-channel sound queue/array be done? (Ex: The queue stores sound1.wav and sound2.wav. Channels A and B can have sounds sent to them from the queue via, say, a function in the application's code. This way, sounds are loaded once into a storage area for easy reference within the program.)
Is there a built-in way to route sounds to different channels depending on the situation? (Ex: sound1.wav normally plays on channel A, but sound2.wav is currently occupying channel A. Can sound1.wav be routed to channel B instead?)
Can individual channels be managed independently? (Ex: channel A should be at 100% volume while channel B is at 50%.)
I've heard that playing multiple sounds simultaneously can lower their volume in some circumstances. Can this be avoided by using separate channels? (Ex: sound1.wav plays on channel A at 100% and sound2.wav plays on channel B at 100%; both need to sound as they do at full volume.)
Is it possible to give a "priority tag" to a sound, to determine which channel it plays on? (Ex: Channel A is at 100% volume, channel B is at 50%. sound1.wav has a rank of 5, so it outranks sound2.wav, a rank-2 sound. If both sounds try to play simultaneously, sound1.wav always gets channel A, the priority channel. However, if sound1.wav is not playing, sound2.wav can use channel A.)
I don't have any source code at this time, but I decided to post this to see whether my questions have answers before I get into a tangled programming mess.
Thanks in advance.
-Cyrano