
playing many sounds independently

Nov 24, 2011 at 3:05 PM

Hi. In my app I need to play many sounds independently. From reading discussions and examples I am assuming I should use WaveMixerStream32, but I have no idea how to connect all the elements so that they work properly. To me the schema looks like: many (WaveFileReader -> WaveChannel32) -> WaveMixerStream32 -> output device (e.g. DirectSoundOut). But in this schema only the output device has Play() and Pause() methods - so how can I play and stop the files separately?
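
To make this concrete, here is roughly the wiring I mean (a rough sketch - WaveOut stands in for whatever output device is used, and the file names are made up):

```
using NAudio.Wave;

var mixer = new WaveMixerStream32 { AutoStop = false }; // keep producing audio even when inputs run dry

// one chain per sound
var channel1 = new WaveChannel32(new WaveFileReader("sound1.wav"));
var channel2 = new WaveChannel32(new WaveFileReader("sound2.wav"));
mixer.AddInputStream(channel1);
mixer.AddInputStream(channel2);

var waveOut = new WaveOut();
waveOut.Init(mixer);
waveOut.Play();   // Play()/Pause()/Stop() exist only here, on the device
```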

I would like to be able to start and stop playback of individual sounds when certain events fire (e.g. playSound1(), playSound2(), stopSound1(), stopSound2() - triggered by different buttons).

Can you give me some examples of how to achieve this?

Nov 25, 2011 at 11:05 AM
Edited Nov 25, 2011 at 1:06 PM

I'm doing exactly this. 

I am using the sine wave generator that Mark has posted as an example, but you can use existing wave fragments as well.

Basically, any IWaveProvider.

Each note played is a separate WaveStream (this can be optimized later).

I'm hooking these up to the mixer through a WaveChannel32, using WaveMixerStream32's AddInputStream method, but only after passing each one through a couple more WaveStreams:

  • WaveOffsetStream, to delay the sound by a given amount
  • FadeStream (easy to write - see the sketch after this list), which fades in and out at given moments; this is needed to remove audible clicking
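
FadeStream is my own class, not part of NAudio. A minimal sketch of the idea, assuming the source delivers 32-bit IEEE float samples (which is what WaveChannel32 outputs) and that a single fade length is used at both ends:

```
using System;
using NAudio.Wave;

// Applies a linear fade-in at the start of the source and a linear
// fade-out over its final stretch, to avoid clicks at note boundaries.
public class FadeStream : WaveStream
{
    private readonly WaveStream source;
    private readonly long fadeSamples; // fade length in interleaved samples

    public FadeStream(WaveStream source, TimeSpan fadeLength)
    {
        this.source = source;
        fadeSamples = (long)(fadeLength.TotalSeconds * source.WaveFormat.SampleRate)
                      * source.WaveFormat.Channels;
    }

    public override WaveFormat WaveFormat { get { return source.WaveFormat; } }
    public override long Length { get { return source.Length; } }
    public override long Position
    {
        get { return source.Position; }
        set { source.Position = value; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long startSample = source.Position / 4;  // 4 bytes per float sample
        long totalSamples = source.Length / 4;
        int bytesRead = source.Read(buffer, offset, count);
        for (int n = 0; n < bytesRead; n += 4)
        {
            long i = startSample + n / 4;
            float gain = 1.0f;
            if (i < fadeSamples)                      // fading in
                gain = (float)i / fadeSamples;
            else if (i > totalSamples - fadeSamples)  // fading out
                gain = Math.Max(0f, (float)(totalSamples - i) / fadeSamples);
            float sample = BitConverter.ToSingle(buffer, offset + n) * gain;
            Buffer.BlockCopy(BitConverter.GetBytes(sample), 0, buffer, offset + n, 4);
        }
        return bytesRead;
    }
}
```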

I also want to add a

  • StopSignallingStream

that raises an event when its Read() returns 0 bytes, or fewer bytes than requested, signalling the end of the input,

and use that event to call RemoveInputStream so a finished stream is taken out of the mixer.
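
A sketch of what I mean (again, StopSignallingStream is just my own name for it), plus the hookup:

```
using System;
using System.Threading;
using NAudio.Wave;

// Wraps a source WaveStream and raises Stopped once, the first time Read()
// delivers fewer bytes than asked for (i.e. the source has run out).
public class StopSignallingStream : WaveStream
{
    private readonly WaveStream source;
    private bool signalled;

    public event EventHandler Stopped;

    public StopSignallingStream(WaveStream source)
    {
        this.source = source;
    }

    public override WaveFormat WaveFormat { get { return source.WaveFormat; } }
    public override long Length { get { return source.Length; } }
    public override long Position
    {
        get { return source.Position; }
        set { source.Position = value; }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = source.Read(buffer, offset, count);
        if (bytesRead < count && !signalled)
        {
            signalled = true;
            var handler = Stopped;
            if (handler != null) handler(this, EventArgs.Empty);
        }
        return bytesRead;
    }
}

// Hookup for one note's chain, e.g. inside a PlayNote method
// (mixer and fadeStream as described above). The event fires on the
// playback thread, in the middle of the mix loop, so the removal is
// deferred instead of mutating the mixer's input list right there.
var note = new StopSignallingStream(fadeStream);
note.Stopped += (sender, args) =>
    ThreadPool.QueueUserWorkItem(_ => mixer.RemoveInputStream(note));
mixer.AddInputStream(note);
```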

It appears to work well, but I haven't tested it with a large number of notes yet.

I've only just started.  I'm very happy with the library's capabilities and ease of use.

The only thing that I, as a total newbie in audio programming, find confusing is the set of implicit constraints in the design and implementation, for instance:

  • The WaveStream classes are Streams, so they have all the Stream methods, but only a few of them (mainly Read()) are supposed to ever be used - I've come to realize that when using them I should regard them as IWaveProviders that just happen to use a (Wave)Stream for their implementation (the IWaveProvider interface, quoted after this list, is really all I use).
    For clarity, I'd prefer that they didn't expose these methods in the first place.
  • Different WaveStreams read and produce different sets of WaveFormats - it would be helpful if this were evident from their types so it could be checked statically. There could be a whole family of Read*() methods instead of just one or two.
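
For reference, this is the whole surface NAudio's IWaveProvider declares - a format plus a single Read() - and it is really all I rely on:

```
public interface IWaveProvider
{
    WaveFormat WaveFormat { get; }
    int Read(byte[] buffer, int offset, int count);
}
```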