
Streaming question

Jun 18, 2011 at 12:14 PM
Edited Jun 18, 2011 at 3:01 PM

Hi there,

I have a question in a streaming scenario.

Scenario that is working at the moment: I have one client that uses WaveIn to capture samples from a mic, converts these samples via a codec into compressed bytes, and sends them across a network to a server. The server decodes the samples and uses BufferedWaveProvider to output to an output device.
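
Roughly, the working path looks like this (a simplified sketch; the 8 kHz mono format is an illustrative assumption, and the codec and network parts are only indicated by comments):

    using NAudio.Wave;

    // Client side: capture mic samples and hand each buffer to your codec/network layer.
    var waveIn = new WaveIn { WaveFormat = new WaveFormat(8000, 16, 1) }; // assumed capture format
    waveIn.DataAvailable += (s, e) =>
    {
        // e.Buffer contains e.BytesRecorded bytes of raw PCM;
        // encode them and send them to the server here (omitted).
    };
    waveIn.StartRecording();

    // Server side: queue decoded bytes and play them back.
    var buffered = new BufferedWaveProvider(new WaveFormat(8000, 16, 1)); // must match the decoded format
    var waveOut = new WaveOut();
    waveOut.Init(buffered);
    waveOut.Play();

    // For each packet decoded from the network:
    // buffered.AddSamples(decodedBytes, 0, decodedBytes.Length);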

Now I need a hint on how to implement the following scenario: I have several client computers that all send their samples to the same server. This server needs to mix the samples and output them.

I tried sticking to the WaveOut and BufferedWaveProvider methodology, but had no luck there. I guess I have to use a mixer class, but I can't figure out which WaveStream implementation to choose.

I hope someone can help me with this!

Thanks

Jun 18, 2011 at 9:01 PM

Hi YogiB

In NAudioWpfDemo (available with the NAudio source) there is a DrumMachineDemo.
That project mixes multiple audio files using MixingSampleProvider.
NAudio also has WaveToSampleProvider and SampleToWaveProvider classes for converting between IWaveProvider and ISampleProvider.
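
For illustration, those helpers fit together roughly like this (a minimal sketch; the BufferedWaveProvider here is just a stand-in for any 32-bit IEEE float IWaveProvider, which WaveToSampleProvider expects):

    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    // Any IWaveProvider in 32-bit IEEE float format will do as a source.
    IWaveProvider floatSource = new BufferedWaveProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));

    // IWaveProvider (IEEE float) -> ISampleProvider
    ISampleProvider samples = new WaveToSampleProvider(floatSource);

    // Mix one or more ISampleProviders with matching formats
    var mixer = new MixingSampleProvider(new[] { samples });

    // ISampleProvider -> IWaveProvider, e.g. to pass to WaveOut.Init
    IWaveProvider mixedWave = new SampleToWaveProvider(mixer);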

Jun 18, 2011 at 9:06 PM

Hi YogiB, hasankhan has pointed you in the right direction. I never wrote a mixer that works directly with IWaveProviders, although you can easily create a simple adapter class to convert an IWaveProvider into a WaveStream. In NAudio 1.5, MixingSampleProvider will be the recommended way to do mixing with 32-bit floating point audio. If you don't want to switch to the new ISampleProvider interface, you could also adapt the existing WaveMixerStream32 to work with IWaveProvider instead of WaveStream.
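
Such an adapter could be as simple as the following sketch (the class name WaveProviderStream is made up, and faking Length/Position is a simplification that is usually fine for endless live sources):

    using NAudio.Wave;

    // Illustrative adapter: wraps an IWaveProvider so it can be used where a WaveStream is expected.
    public class WaveProviderStream : WaveStream
    {
        private readonly IWaveProvider source;

        public WaveProviderStream(IWaveProvider source)
        {
            this.source = source;
        }

        public override WaveFormat WaveFormat
        {
            get { return source.WaveFormat; }
        }

        public override long Length
        {
            get { return long.MaxValue; } // live stream: no meaningful length
        }

        public override long Position
        {
            get { return 0; }
            set { /* seeking is not supported for a live source */ }
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            return source.Read(buffer, offset, count);
        }
    }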

Mark

Jun 19, 2011 at 11:46 AM

Hi hasankhan and Mark,

thanks a lot for your valuable answers - you definitely pushed me in the right direction and I managed to complete the task.

This is what I ended up with, for those who are interested:

Output pipeline (set up just once):

  • A MixingSampleProvider with an IEEE float format
  • Feed this to SampleToWaveProvider
  • Feed this to WaveOut

For each "client" that broadcasts it's samples from mic:

  • Created a WaveProvider with original wave format (usually 16 bit PCM, 1 channel only)
  • Feed this to stereo using MonoToStereoProvider16
  • Feed this to Pcm16BitToSampleProvider

On each sample block I receive over the network, I simply feed it to the WaveProvider's AddSample method. You also need to modify the WaveOutBuffer code, because it would otherwise stop playing back when no data is present (no client connected at all, a client send timeout, ...). This way playback never stops unless I explicitly tell it to.
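
Roughly, the wiring looks like this (a simplified sketch using the current NAudio API, which may differ slightly from the 1.5-era classes; the 8 kHz sample rate and the use of BufferedWaveProvider as the per-client buffer are assumptions for illustration):

    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    // Output pipeline, set up just once.
    var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(8000, 2));
    var waveOut = new WaveOut();
    waveOut.Init(new SampleToWaveProvider(mixer));
    waveOut.Play();

    // Per-client chain: buffered 16-bit PCM mono -> stereo -> float samples -> mixer input.
    var clientBuffer = new BufferedWaveProvider(new WaveFormat(8000, 16, 1));
    var stereo = new MonoToStereoProvider16(clientBuffer);
    mixer.AddMixerInput(new Pcm16BitToSampleProvider(stereo));

    // For every decoded packet received from that client:
    // clientBuffer.AddSamples(decodedBytes, 0, decodedBytes.Length);

    // Note: newer NAudio versions expose a ReadFully flag on BufferedWaveProvider and
    // MixingSampleProvider that keeps playback running when no data is queued, which
    // may remove the need to patch WaveOutBuffer.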

I hope this helps people. Thanks again for your answers.

I think the audio pipeline is very powerful, but the learning curve is steep in the beginning, as you don't intuitively know which helper and converter classes to use or which formats they expect, except by trial and error.

Jun 22, 2011 at 9:17 AM

Glad you got it working in the end. Understanding the audio pipeline is the key to using NAudio effectively. I'm hoping to make a video tutorial at some point to explain it properly.

Mark

Jul 9, 2011 at 4:05 PM

Is there any way this can be accomplished without using SampleProviders, or could someone provide more detail on this?

Cheers!

Luke