
How to combine multiple captures of audio?

Jun 25, 2013 at 12:42 AM
Hey guys,

New problem for you here today xD. What I want to do is capture both the speaker audio and the microphone audio and save that data in a single WAV file. Is it possible to stream both directly to one audio file using NAudio, or do I have to record them separately and merge them together? Either way is fine. The idea is that these two different sources of audio, when put together, form a conversation. So I am not looking to concatenate them; I am looking to mix or layer them on top of each other.

Thanks in advance for the response!

Jun 25, 2013 at 2:29 PM
You would need to mix the two recordings together afterwards. If the two recordings are both continuous, have the same sample rate, and started at exactly the same time, then this is relatively straightforward. I'd use the MixingSampleProvider to mix them, and convert down to 16 bit to write to a file afterwards.
Jun 25, 2013 at 11:58 PM
Edited Jun 26, 2013 at 12:00 AM
I'm having a hard time implementing this; could you show an example or link to one?

Also, out of curiosity, is it at all possible to read in the two audio streams as I am writing them, to sort of simulate them being written to one file on the fly? That way the user doesn't have to wait after making the recording for the two files to merge.

Jun 27, 2013 at 4:01 PM
Here's a simple code snippet that will work with the alpha release of NAudio 1.7 (get it on NuGet). The two input files must have the same sample rate and channel count for this to work:

using (var reader1 = new WaveFileReader("file1.wav"))
using (var reader2 = new WaveFileReader("file2.wav"))
{
    // convert each reader to an ISampleProvider and mix them together
    var inputs = new List<ISampleProvider>()
    {
        reader1.ToSampleProvider(),
        reader2.ToSampleProvider()
    };
    var mixer = new MixingSampleProvider(inputs);
    // convert back down to 16 bit when writing out
    WaveFileWriter.CreateWaveFile16("mixed.wav", mixer);
}
Jun 27, 2013 at 4:02 PM
... and yes, mixing on the fly can be done with the MixingSampleProvider, with BufferedWaveProviders as the inputs, but the code sample would be a little more convoluted to write and I don't have an example to hand at the moment.
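Not from the thread, but here is a rough sketch of how that on-the-fly approach might be wired up: each capture device feeds a BufferedWaveProvider, and a worker loop pulls mixed samples straight into one WAV file. It assumes both inputs end up at the same sample rate and channel count after conversion to samples (in practice you may need resampling first), and names like keepRecording are illustrative:

```csharp
using System.Collections.Generic;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Sketch only: mic + loopback mixed as they arrive, written to one WAV.
var micCapture = new WaveInEvent();
var speakerCapture = new WasapiLoopbackCapture();

// Buffers absorb timing jitter between the two devices.
var micBuffer = new BufferedWaveProvider(micCapture.WaveFormat);
var speakerBuffer = new BufferedWaveProvider(speakerCapture.WaveFormat);

micCapture.DataAvailable += (s, e) =>
    micBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
speakerCapture.DataAvailable += (s, e) =>
    speakerBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

var mixer = new MixingSampleProvider(new List<ISampleProvider>
{
    micBuffer.ToSampleProvider(),
    speakerBuffer.ToSampleProvider()
});

micCapture.StartRecording();
speakerCapture.StartRecording();

// Run this loop on a worker thread; clear keepRecording
// (illustrative flag) when the conversation ends.
bool keepRecording = true;
using (var writer = new WaveFileWriter("conversation.wav", mixer.WaveFormat))
{
    var buffer = new float[mixer.WaveFormat.SampleRate]; // roughly a second
    while (keepRecording)
    {
        int read = mixer.Read(buffer, 0, buffer.Length);
        writer.WriteSamples(buffer, 0, read);
    }
}

micCapture.StopRecording();
speakerCapture.StopRecording();
```

MixingSampleProvider will throw if the converted inputs don't share a WaveFormat, which is why the sample-rate caveat above matters.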
Jun 27, 2013 at 9:41 PM
The function "ToSampleProvider()" doesn't seem to show up for me. Is there a specific namespace I need from NAudio maybe?
Jun 27, 2013 at 9:47 PM
That's in the latest alpha of NAudio 1.7. Under the hood it's just calling SampleProviderConverters.ConvertWaveProviderIntoSampleProvider, so use that instead.
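A quick sketch of the two equivalent forms, assuming the NAudio 1.7 types named above (the file name is just a placeholder):

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

using (var reader = new WaveFileReader("file1.wav"))
{
    // The extension method, available in NAudio 1.7+:
    ISampleProvider viaExtension = reader.ToSampleProvider();

    // Equivalent direct call if the extension method isn't showing up:
    ISampleProvider viaConverter =
        SampleProviderConverters.ConvertWaveProviderIntoSampleProvider(reader);
}
```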
Jun 27, 2013 at 10:42 PM
Hmmm, the only one that pops up for me is "SampleProviderConverterBase", and there is no "ConvertWaveProviderIntoSampleProvider" under there. Sorry, I'm sort of new to C#. What am I doing wrong here xD?
Jun 27, 2013 at 10:54 PM
Ohh okay. So I installed the NAudio 1.7 alpha 05 version, and I have that function now. But now I get an "Unsupported source encoding" error. This happens once it reaches the list of ISampleProviders. Any ideas?
Jun 27, 2013 at 11:32 PM
Edited Jun 28, 2013 at 12:03 AM
Okay, so I found out that the two files I have been trying to mix together have different bit depths: one is 16 and the other is 32.

So what I did was change the one of my voice to 32 bit as well.

But I still have the same problem...

Using the demo application "Audio File Inspector", I found that there are some other differences between the audio files, but I don't know how to change them. I'll post the output below.

My Voice:
Opening C:\Users\Stefano\Desktop\input\06-27-2013\06-27-2013[15.53.14].wav
Pcm 44100Hz 2 channels 32 bits per sample
Extra Size: 0 Block Align: 8 Average Bytes Per Second: 352800
WaveFormat: 32 bit PCM: 44kHz 2 channels
Length: 2214912 bytes: 00:00:06.2780000

My Speakers using WasapiLoopbackCapture class:
Opening C:\Users\Stefano\Desktop\output\06-27-2013\06-27-2013[15.53.14].wav
Extensible 44100Hz 2 channels 32 bits per sample
Extra Size: 22 Block Align: 8 Average Bytes Per Second: 352800
WaveFormat: 32 bit PCM: 44kHz 2 channels
Length: 2192848 bytes: 00:00:06.2160000
Chunk: fact, length 4
BA 2E 04 00

Any ideas how to change the second one to match the first?

Jul 1, 2013 at 2:54 PM
I can never remember whether WASAPI loopback capture is 32-bit int or 32-bit floating point. I think it's the former, but check the SubFormat property (it should be either AudioMediaSubtypes.MEDIASUBTYPE_IEEE_FLOAT or AudioMediaSubtypes.MEDIASUBTYPE_PCM).

If it is 32-bit int, use Pcm32BitToSampleProvider, and if it is 32-bit float, use WaveToSampleProvider.
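Putting those two posts together, a sketch of that check might look like the following; the helper name is made up, and it assumes the NAudio 1.7 types named above:

```csharp
using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Hypothetical helper: pick the right 32-bit converter for a source,
// including the WAVE_FORMAT_EXTENSIBLE case WasapiLoopbackCapture produces.
static ISampleProvider ToSamples32(IWaveProvider source)
{
    var wf = source.WaveFormat;
    if (wf.Encoding == WaveFormatEncoding.IeeeFloat)
        return new WaveToSampleProvider(source);        // 32-bit float
    if (wf.Encoding == WaveFormatEncoding.Pcm && wf.BitsPerSample == 32)
        return new Pcm32BitToSampleProvider(source);    // 32-bit int
    if (wf is WaveFormatExtensible wfe)
    {
        // Extensible formats carry the real encoding in SubFormat
        if (wfe.SubFormat == AudioMediaSubtypes.MEDIASUBTYPE_IEEE_FLOAT)
            return new WaveToSampleProvider(source);
        if (wfe.SubFormat == AudioMediaSubtypes.MEDIASUBTYPE_PCM)
            return new Pcm32BitToSampleProvider(source);
    }
    throw new NotSupportedException("Unsupported source encoding");
}
```

Once both files go through a helper like this, the two ISampleProviders should have matching float formats and can be fed to MixingSampleProvider as in the earlier snippet.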