Hi all,
I am trying to figure out a way to use the SFML toolkit in a real-time audio synthesis project. (As a kind of experiment, just for fun.)
SFML would be handy since I can use sf::RenderWindow etc. to create a basic UI.
It appears to me that there are two methods I could use.
The first is with sf::Sound and sf::SoundBuffer.
The second is with sf::SoundStream.
Let us consider a program which plays a 440 Hz sine wave while a key is held, say the Z key on my keyboard, and silence otherwise.
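To make this concrete, here is a sketch (plain C++, no SFML involved yet) of what I mean by "filling a buffer with sine or silence". The amplitude, sample rate and the phase bookkeeping via an absolute sample index are my own arbitrary choices:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Generate `count` mono 16-bit samples of a 440 Hz sine (if keyDown) or
// silence, starting at absolute sample index `t` so the phase stays
// continuous from one buffer to the next.
std::vector<std::int16_t> makeSamples(std::size_t count, std::uint64_t t,
                                      bool keyDown,
                                      unsigned sampleRate = 44100)
{
    const double pi        = 3.14159265358979323846;
    const double freq      = 440.0;
    const double amplitude = 0.25 * 32767.0;   // about -12 dB, leaves headroom

    std::vector<std::int16_t> out(count, 0);   // silence by default
    if (keyDown)
        for (std::size_t i = 0; i < count; ++i)
            out[i] = static_cast<std::int16_t>(
                amplitude * std::sin(2.0 * pi * freq
                                     * double(t + i) / sampleRate));
    return out;
}
```

Passing the running sample index `t` between calls matters: restarting the sine at phase zero in every buffer would itself produce clicks at the buffer boundaries.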
Let us discuss the first, first.
***SoundBuffer***
I have two possible ideas here, although I don't think either will work particularly well. The first is to generate a buffer, say for 0.1 s of sound, and fill it as the event loop runs: check the elapsed time and write that duration of sine wave or silence, depending on whether the Z key is pressed. Then, every time the event loop crosses a 0.1 s boundary, play the new buffer contents and start filling the next buffer.
This method has a problem. There will typically be a gap between the audio thread finishing the current buffer and the next call to sf::Sound::play() starting the new one. In addition, the output is always delayed by 0.1 s (which is quite large), and the continual stopping and starting adds a horrendous low-frequency artefact to the signal, one click per 0.1 s buffer, i.e. at roughly 10 Hz.
To reduce this problem, one could allocate enough memory for, say, 30 s of audio, press the keys, and then hear the output 30 s later. The obvious drawbacks are that you are limited to 30 s of audio, and you can't hear what you're playing until afterwards.
The only resolution I can see is to detect when the sound has finished playing and immediately play the next buffer. This would shrink the gaps, but not eliminate them, and I am not sure it can be done reliably... I suspect there is a better way.
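To illustrate what I mean by "detect when the playing thread has finished": the event loop could poll the sound's status and requeue as soon as it reports stopped. This is SFML-2-style API, and the function name and the 44.1 kHz mono format are just my choices for the sketch; the gap between the status changing and play() resuming is exactly the click I am trying to get rid of.

```cpp
#include <SFML/Audio.hpp>
#include <vector>

// Call this from the event loop on every iteration: as soon as the
// previous buffer has drained, upload the freshly generated samples
// and restart playback.  Some silent gap remains unavoidable, because
// we only notice the sound has stopped after the fact.
void requeueIfFinished(sf::Sound& sound, sf::SoundBuffer& buffer,
                       const std::vector<sf::Int16>& newSamples)
{
    if (sound.getStatus() != sf::Sound::Playing)
    {
        buffer.loadFromSamples(newSamples.data(), newSamples.size(),
                               1, 44100);          // mono, 44.1 kHz
        sound.setBuffer(buffer);
        sound.play();
    }
}
```

Even with this, the latency floor is the buffer length, plus however long the polling loop takes to notice the stop.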
***SoundStream***
After attempting to implement the first method, and finding essentially that "it doesn't work very well" (as you probably suspected), I did some more research and found the sf::SoundStream class.
Is this the correct object to use for this task?
I have had a look at the VOIP example, but found it didn't really help much: there is a lot of network code in it, which makes it difficult to decipher what is happening if one is not familiar with network programming.
After studying this example, I found a further problem: it works by recording sound samples on one PC and sending them to another, so it doesn't really matter too much if there is an associated delay. (I am not sure how large that delay is; I assume less than ~0.2 s, but for a phone conversation it won't really matter so long as it's not "too" large.)
Going back to sf::SoundStream, it is my understanding that I must override the onGetData and onSeek virtual functions in my own derived class.
What are these functions supposed to do, and how should I override them? I assume onGetData() should "generate new samples", but how many? Where should I store these samples? How can I discard old samples which are no longer required? And what if I simultaneously want to record the output to a .wav file?
Finally, what does the "onSeek" function do? Why would I need to "seek" in an audio bitstream if I am writing something like a synthesizer?
Apologies for the long question, I tried to make it clear what I am asking.
By the way, I haven't actually asked the main question yet, which is: is sf::SoundStream the best method of implementing this? Is there another, better method? Perhaps I should be using another external library which is not SFML-based at all?