Hi, I just read the tutorial and API documentation for the audio module: it sounds awesome!
I plan to introduce an audio system to my game, which is already using SFML, but I wonder how to organize the system itself. I'm building an entity-component-based top-down dungeon crawler: my physics system holds physics-related data such as position, facing direction, etc., and my render system holds the corresponding sprites and lights per object.
I'd design my audio system to allow two instances of sf::Music: one for the music itself and one for an ambience track. So holding two instances and calling openFromFile(), play() etc. seems like the best solution. But I wonder how to organize entity sounds. Of course, all sf::SoundBuffer instances should be cached to avoid loading anything twice; my cache also guarantees that each buffer stays valid until the game session is closed.
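Roughly what I have in mind (AudioSystem, playAmbience, getBuffer etc. are just my working names, and error handling is mostly omitted):

```cpp
#include <SFML/Audio.hpp>
#include <map>
#include <string>

class AudioSystem {
public:
    // Start the background music (streamed, not preloaded).
    bool playMusic(const std::string& filename) {
        if (!m_music.openFromFile(filename))
            return false;
        m_music.setLoop(true);
        m_music.play();
        return true;
    }

    // Same idea for the second stream, the ambience track.
    bool playAmbience(const std::string& filename) {
        if (!m_ambience.openFromFile(filename))
            return false;
        m_ambience.setLoop(true);
        m_ambience.play();
        return true;
    }

    // Buffer cache: each file is loaded once and kept alive for the session.
    sf::SoundBuffer& getBuffer(const std::string& filename) {
        auto it = m_buffers.find(filename);
        if (it == m_buffers.end()) {
            it = m_buffers.emplace(filename, sf::SoundBuffer()).first;
            it->second.loadFromFile(filename); // TODO: handle load failure
        }
        return it->second;
    }

private:
    sf::Music m_music;     // background music
    sf::Music m_ambience;  // ambience track
    std::map<std::string, sf::SoundBuffer> m_buffers;
};
```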
For instance, an entity might cast a fireball, so a fireball-ish sound could be played when the fireball is created and starts flying towards its target. But what's the most suitable way to organize those sounds? After reading about sf::Listener, I'd also like to spatialize entity-related sounds.
Because the number of parallel sounds is limited, storing one sf::Sound per entity wouldn't be that great. So I'm thinking of a pool of reusable sounds. When a sound effect is triggered (e.g. with parameters like the sfx name, the entity position and other spatialization details), it might work this way (rough sketch after the list):
- Grab the next free (i.e. currently not playing) sf::Sound.
- Grab the sf::SoundBuffer from the cache and apply it.
- Set up the sound position and other spatialization details.
- Finally, play it.
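In code, I picture the trigger roughly like this (playSound and acquireFreeSound are just my working names; the slot lookup itself comes further down):

```cpp
#include <SFML/Audio.hpp>
#include <vector>

// Slot lookup over the preallocated pool -- sketched in a later snippet.
sf::Sound* acquireFreeSound(std::vector<sf::Sound>& pool);

// Rough sketch of the trigger flow described in the list above.
void playSound(std::vector<sf::Sound>& pool, const sf::SoundBuffer& buffer,
               const sf::Vector3f& position)
{
    sf::Sound* sound = acquireFreeSound(pool);
    if (sound == nullptr)
        return; // pool exhausted -- see my question further down

    sound->setBuffer(buffer);      // buffer comes from the cache, so it stays valid
    sound->setPosition(position);  // spatialized relative to sf::Listener
    sound->play();
}
```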
Would this be suitable? If yes, here's my next question:
How should I organize that pool? I could hold a set of (let's say 50) instances of sf::Sound inside a std::vector, preinitialized at the system's startup. Searching for a "free spot" (where the sound is currently not playing) would mean a linear search, i.e. O(n). Depending on n, that might be fast enough, but is it a good solution? Previously I was working with SDL, and if I remember correctly, there is a way to specify a callback that is invoked when a sound stops. I didn't find anything similar in the API docs... Is there such a mechanism?
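The linear search I mean would be something like this, polling sf::Sound::getStatus() since I found no stop-callback:

```cpp
// O(n) scan for a slot whose sound has finished playing.
// Returns nullptr if every slot is still in use.
sf::Sound* acquireFreeSound(std::vector<sf::Sound>& pool)
{
    for (auto& sound : pool) {
        if (sound.getStatus() == sf::Sound::Stopped)
            return &sound;
    }
    return nullptr; // every slot is currently playing (or paused)
}
```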
And yet another question: if no free spot can be found because too many sounds are currently playing, how should I handle this? The "intuitive" solution might be to stop the sound that started playing earliest, but that would imply sorting the sounds by the order of playback. Another option would be to simply drop the new sound (or even throw an exception), but I don't like that. Or is that situation purely hypothetical, or even a sign of bad design?
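For the "stop the oldest" idea, I guess a start timestamp per slot and a linear min-scan would avoid actual sorting. Something like this (the Slot struct is my invention, and 'now' would come from a game-wide sf::Clock):

```cpp
#include <SFML/Audio.hpp>
#include <vector>

// One pool slot: the sound plus the time at which it was started.
struct Slot {
    sf::Sound sound;
    sf::Time  startedAt;
};

// Find a stopped slot; if none exists, recycle the one that started
// earliest. Assumes a non-empty pool.
Slot& acquireSlot(std::vector<Slot>& pool, sf::Time now)
{
    Slot* oldest = &pool.front();
    for (auto& slot : pool) {
        if (slot.sound.getStatus() == sf::Sound::Stopped) {
            slot.startedAt = now;
            return slot;
        }
        if (slot.startedAt < oldest->startedAt)
            oldest = &slot;
    }
    oldest->sound.stop(); // steal the oldest voice
    oldest->startedAt = now;
    return *oldest;
}
```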
/EDIT: And the final question: when working with spatialized audio, I'd set the listener position to the player's position. But what if multiple players share one screen (no matter whether split-screen or shared)? I could calculate the barycenter of those players and set the listener position to that point, but this would give wrong results if the players are too far away from each other. Can spatialization be applied to such situations?
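The barycenter idea in code would just be averaging the positions (whether that sounds right with far-apart players is exactly my question):

```cpp
#include <SFML/Audio.hpp>
#include <vector>

// Average the player positions and use the result as the listener position.
void updateListener(const std::vector<sf::Vector3f>& playerPositions)
{
    if (playerPositions.empty())
        return;

    sf::Vector3f center(0.f, 0.f, 0.f);
    for (const auto& pos : playerPositions)
        center += pos;
    center /= static_cast<float>(playerPositions.size());

    sf::Listener::setPosition(center);
}
```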
Kind regards
Glocke