
Buffer length in WaveStream Read

Mar 14, 2013 at 10:27 AM

I'm a beginner with NAudio. I'm working on tutorial 6 ... and made some changes to test.

So I now have a sourceStream_DataAvailable handler that fires on a microphone source stream.

I implemented a TestWaveStream derived from WaveStream. My sourceStream_DataAvailable handler sends its recorded buffer to TestWaveStream .... (I'll add effects here in the future.)

Then I have a WaveOut stream linked to TestWaveStream. In the Read function, I want to pass on the data that I previously acquired in sourceStream_DataAvailable.

For both the input and output streams, I'm using a WaveFormat of 16000 Hz, 16 bits, one channel.

Here I have a problem, because in the Read function the lengths are not the same for input and output.
I mean: the buffer in sourceStream_DataAvailable has a length of 3200 bytes, while in the Read function the buffer argument has a length of 2560 bytes and an offset of 1280. So I don't understand why the buffer in the Read function doesn't have a length of 3200; then I could copy the input buffer to the output one directly.

Did I miss something? Could someone explain this to me?

Mar 14, 2013 at 3:25 PM
The buffer sizes of record and playback may not be the same. Use a BufferedWaveProvider for an easy way around this. Add audio into the buffered wave provider as you receive it.
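A minimal sketch of that wiring, assuming the 16 kHz, 16-bit mono format from the question (the variable names are illustrative; the classes and events are from the NAudio API):

```csharp
using NAudio.Wave;

var waveFormat = new WaveFormat(16000, 16, 1);

var sourceStream = new WaveIn { WaveFormat = waveFormat };
var bufferedProvider = new BufferedWaveProvider(waveFormat);

// Copy each recorded buffer into the BufferedWaveProvider as it arrives.
// WaveOut then reads from it at its own pace, so the record and playback
// buffer sizes no longer need to match.
sourceStream.DataAvailable += (s, e) =>
    bufferedProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);

var waveOut = new WaveOut();
waveOut.Init(bufferedProvider);
sourceStream.StartRecording();
waveOut.Play();
```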
Mar 14, 2013 at 8:37 PM
Thank you for your quick and helpful answer; it runs perfectly.

I have another question: when streaming the mic to the loudspeaker, the volume is really low compared with playing an MP3.

I implemented a MixerLine to control the microphone level; it works fine within certain limits.

I also implemented "waveOutSetVolume" from "winmm.dll"; it works, but not as well as I expected. The level is not very high.

So, using the solution you gave, I multiply every sample in the buffer (converting to Int16, multiplying, reconverting to a byte array) by a defined value before passing it to the BufferedWaveProvider. It seems to give good results, but I'm not really sure it is a good way.
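For reference, a sketch of that manual approach, assuming 16-bit little-endian PCM as in the format above (the helper name is made up); note the clamping step, which the description doesn't mention but which matters once the gain pushes samples past the Int16 range:

```csharp
using System;

// Multiply each 16-bit sample by a gain factor, clamping to the Int16
// range to avoid integer wrap-around (which sounds like harsh distortion).
static void AmplifyInPlace(byte[] buffer, int bytesRecorded, float gain)
{
    for (int i = 0; i < bytesRecorded; i += 2)
    {
        short sample = BitConverter.ToInt16(buffer, i);
        int amplified = (int)(sample * gain);
        if (amplified > short.MaxValue) amplified = short.MaxValue;
        if (amplified < short.MinValue) amplified = short.MinValue;
        byte[] bytes = BitConverter.GetBytes((short)amplified);
        buffer[i] = bytes[0];
        buffer[i + 1] = bytes[1];
    }
}
```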

I read about WaveChannel32 here, but I didn't understand how to adjust the volume using it ... and I didn't find more information.

Could you please tell me which way is best? Or is there another one I didn't find?

Mar 18, 2013 at 12:24 PM
WaveChannel32 adjusts the volume of each individual sample as it passes through. 1.0 means no change, 0.0 means silence, and 2.0 will double the amplitude of each sample, although you risk clipping.

WaveChannel32 is probably a bit cumbersome here. I'd use the Wave16ToSampleProvider, then a VolumeSampleProvider, and finally a SampleToWaveProvider16 at the end before playback.
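The chain described above might look like this sketch. Exact class names vary by NAudio version: here the 16-bit-to-float step uses the ToSampleProvider extension method, and the gain value of 4.0f is an arbitrary example, so treat both as assumptions:

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Recorded audio is added to the BufferedWaveProvider as before.
var bufferedProvider = new BufferedWaveProvider(new WaveFormat(16000, 16, 1));

// 16-bit PCM -> float samples -> volume adjustment -> back to 16-bit PCM.
var samples = bufferedProvider.ToSampleProvider();
var volume = new VolumeSampleProvider(samples) { Volume = 4.0f };
var output = new SampleToWaveProvider16(volume);

var waveOut = new WaveOut();
waveOut.Init(output);
waveOut.Play();
```

Working in floating point means the intermediate multiplication can't wrap around; SampleToWaveProvider16 clips to the Int16 range on the way out.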