A couple of general questions

Nov 13, 2010 at 7:27 PM
Edited Nov 13, 2010 at 10:39 PM


I've just discovered NAudio and read some tutorials, and now I have a few questions. I am developing a client-server chat application for fun, which is working well, and I am now looking to extend it with VoIP.

1. Network-side: Currently I am using a TCP connection between server and client for the chat rooms and the like. Is TCP unusable for voice chat? I basically want Ventrilo/TeamSpeak/Skype latency; of course, a little more is fine as long as it isn't noticeable. How much does TCP delay the voice chat?

If TCP isn't usable, I guess UDP is the choice then? (I've read about RTP, but I can't find any resources on it. Does it really make that big a difference?)
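If TCP turns out to add too much delay, a bare UDP round trip is only a few lines with UdpClient; the port number and payload below are arbitrary choices for the sketch:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class UdpEcho
{
    // Sends one datagram to a local listener and reads it back.
    // Port 5005 is an arbitrary choice for this sketch.
    public static string RoundTrip()
    {
        using (var receiver = new UdpClient(5005))
        using (var sender = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes("voice packet");
            // No connection setup and no retransmission: a lost packet stays
            // lost, which is usually what you want for live audio.
            sender.Send(payload, payload.Length, "127.0.0.1", 5005);

            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] received = receiver.Receive(ref remote); // blocks until a datagram arrives
            return Encoding.UTF8.GetString(received);
        }
    }

    public static void Main()
    {
        Console.WriteLine(RoundTrip()); // voice packet
    }
}
```

The flip side is that UDP gives you no ordering or delivery guarantees, so for voice you normally just play whatever arrives in time and skip the rest.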


2. I'm basically at the point where I have code like this:

    // Recording
    private IWaveIn waveIn;
    private WaveFileWriter writer;

    private void startRec_Click(object sender, EventArgs e)
    {
        this.waveIn = new WasapiCapture();
        this.writer = new WaveFileWriter("pimpz.wav", waveIn.WaveFormat);
        this.waveIn.DataAvailable += new EventHandler<WaveInEventArgs>(waveIn_DataAvailable);
        this.waveIn.StartRecording();
        this.button_startRec.Enabled = false;
        this.button_stopRec.Enabled = true;
    }

    private void stopRec_Click(object sender, EventArgs e)
    {
        this.waveIn.StopRecording();
        this.waveIn.Dispose();
        this.waveIn = null;
        this.writer.Dispose(); // finalizes the WAV header
        this.writer = null;
        this.button_stopRec.Enabled = false;
        this.button_startRec.Enabled = true;
    }

    private void waveIn_DataAvailable(object sender, WaveInEventArgs e)
    {
        this.writer.WriteData(e.Buffer, 0, e.BytesRecorded);
    }


.....so I'm just recording to a file. What is the best way to extend this so the audio is sent over a network? In my program I convert data into a byte[] and send that byte[]. Does it work like that here too: instead of writing to a file, do I simply write to a byte[] buffer and then send it?
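That is essentially how it works; the one catch is that e.Buffer is owned and reused by the capture device, so you must copy out only the first e.BytesRecorded bytes before queueing them for sending. A minimal sketch of just that copy step (plain C#, no NAudio types):

```csharp
using System;

public class BufferCopyDemo
{
    // The capture buffer is reused between callbacks, so copy out only the
    // first bytesRecorded bytes before handing them to your send routine.
    public static byte[] ExtractPacket(byte[] buffer, int bytesRecorded)
    {
        var packet = new byte[bytesRecorded];
        Buffer.BlockCopy(buffer, 0, packet, 0, bytesRecorded);
        return packet;
    }

    public static void Main()
    {
        // Simulate a device buffer where only the first 4 bytes are valid
        var deviceBuffer = new byte[8] { 1, 2, 3, 4, 0, 0, 0, 0 };
        byte[] packet = ExtractPacket(deviceBuffer, 4);
        Console.WriteLine(packet.Length); // 4
    }
}
```

Inside the DataAvailable handler you would call something like this and pass the result to whatever send method your chat code already uses.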


3. Since recordings get pretty big (judging by how quickly the recorded files grow), I've read about some encoding options, NSpeex being one. In short, how does it work? As I imagine it, I take the byte[] buffer from recording and pass it to an encoder before sending it over the network, then decode on the other side? Are there any other steps involved?
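That's the right mental model: capture gives you a byte[], the encoder shrinks it before it hits the network, and the decoder expands it again on the far side. The sketch below uses a deliberately crude stand-in codec (keep only the high byte of each 16-bit sample, halving the bitrate) just to show where encode and decode sit; NSpeex has its own encoder/decoder classes, but they occupy exactly these two positions in the pipeline.

```csharp
using System;

// Pipeline shape: capture -> Encode -> network -> Decode -> playback.
// This "codec" is a toy; a real codec like Speex compresses far better.
public class ToyCodec
{
    public static byte[] Encode(short[] samples)
    {
        var packet = new byte[samples.Length];
        for (int i = 0; i < samples.Length; i++)
            packet[i] = (byte)(samples[i] >> 8); // lossy: low byte discarded
        return packet;
    }

    public static short[] Decode(byte[] packet)
    {
        var samples = new short[packet.Length];
        for (int i = 0; i < packet.Length; i++)
            samples[i] = (short)(packet[i] << 8); // restore the original scale
        return samples;
    }

    public static void Main()
    {
        short[] original = { 1000, -2000, 300, 0 };
        byte[] wire = Encode(original);      // half the size; this goes over the network
        short[] roundTrip = Decode(wire);    // what the receiver plays back
        for (int i = 0; i < original.Length; i++)
            Console.WriteLine($"{original[i]} -> {roundTrip[i]}");
    }
}
```

The only other step a real implementation adds is framing: codecs like Speex work on fixed-size frames (e.g. 20 ms of samples), so you accumulate captured bytes until you have a full frame before encoding.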


4. Is it possible to only record audio from the microphone when there actually is something to record? For example, total silence from the user shouldn't generate any event to send data. I've read about mixers in some tutorial (which I guess handle this), but where exactly do they come in? For example, in my code above, where would the mixer class fit in?


5. The waveIn.DataAvailable event is raised so you can read data from the audio buffer. How often is it raised, and how many bytes are read each time? For example, if I want to store what is read into a byte[], I need to know how much can be recorded per event so I can adjust my buffer size accordingly.


What's the difference between WasapiCapture and WaveIn when recording?



Nov 22, 2010 at 6:11 PM


Nov 24, 2010 at 2:06 PM

That's a lot of questions you have there!

1. Getting low-latency network streaming is hard. NAudio doesn't solve this problem for you; you would need to write your own code. You also need to specify small buffer sizes for WaveIn and WaveOut.

2. Yes, although you might want to compress the data in some way before it goes over the network.

3. Yes, you need an encoder on one side and a decoder on the other.

4. You would need to detect that the input from the microphone is silence. In reality, most microphones pick up some background noise, so you would need a noise threshold.
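A minimal sketch of such a noise gate for 16-bit little-endian PCM; the threshold of 500 (out of 32767) is just an assumed starting point you would tune by ear:

```csharp
using System;

public class SilenceGate
{
    // Returns true when the loudest 16-bit sample in the buffer stays below
    // the threshold; such buffers can simply not be sent over the network.
    public static bool IsSilent(byte[] buffer, int bytesRecorded, short threshold = 500)
    {
        for (int i = 0; i + 1 < bytesRecorded; i += 2)
        {
            short sample = BitConverter.ToInt16(buffer, i); // little-endian PCM
            if (Math.Abs((int)sample) >= threshold)         // int cast avoids overflow on short.MinValue
                return false;
        }
        return true;
    }

    public static void Main()
    {
        byte[] quiet = BitConverter.GetBytes((short)100);
        byte[] loud = BitConverter.GetBytes((short)8000);
        Console.WriteLine(IsSilent(quiet, quiet.Length)); // True
        Console.WriteLine(IsSilent(loud, loud.Length));   // False
    }
}
```

You would call IsSilent on each buffer inside the DataAvailable handler and skip the send when it returns true. Mixers are a different thing: they combine several audio streams into one (e.g. on the server, mixing all speakers together for playback), so they don't belong in the capture path at all.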

5. How often it gets triggered depends entirely on the buffer sizes you set up with WaveIn.
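As a rough guide, each DataAvailable callback delivers about sampleRate × channels × (bitsPerSample / 8) × bufferMilliseconds / 1000 bytes. A quick sanity check of that arithmetic (the 44.1 kHz mono 16-bit format and the 100 ms buffer are assumed example settings; set BufferMilliseconds explicitly rather than relying on any default):

```csharp
using System;

public class BufferMath
{
    // Approximate bytes delivered per DataAvailable callback:
    // bytes per second of audio, scaled by the buffer length in ms.
    public static int BytesPerCallback(int sampleRate, int channels,
                                       int bitsPerSample, int bufferMilliseconds)
    {
        int bytesPerSecond = sampleRate * channels * (bitsPerSample / 8);
        return bytesPerSecond * bufferMilliseconds / 1000;
    }

    public static void Main()
    {
        // 44.1 kHz, mono, 16-bit, 100 ms buffer (assumed example settings)
        Console.WriteLine(BytesPerCallback(44100, 1, 16, 100)); // 8820
    }
}
```

So rather than guessing a fixed byte[] size, size your receive buffer from the WaveFormat you actually configured.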

6. WasapiCapture works on Vista/Win7 only and is less flexible about the sample rate you capture at. I suggest using WaveIn instead.