
Class for Audio Spectrum

May 13, 2011 at 9:51 PM
Edited May 13, 2011 at 10:28 PM

Which class do I have to use to create an audio spectrum? Sorry, I had a look at your WPF sample, but it is really confusing because I couldn't find anything that takes a float array or something similar with the data.

EDIT: I found something and implemented it, but it is very slow:

        public void Add(float value)
        {
            if (PerformFFT && FftCalculated != null)
            {
                fftBuffer[fftPos].X = value;
                fftBuffer[fftPos].Y = 0;
                fftPos++;
                Console.WriteLine(fftPos + "   " + fftBuffer.Length); // Here it is too slow: it takes about 1-2 min until fftPos reaches 1024
                if (fftPos >= fftBuffer.Length)
                {
                    fftPos = 0;
                    // 1024 = 2^10
                    FastFourierTransform.FFT(true, 10, fftBuffer);
                    FftCalculated(this, fftArgs);
                }
            }

            maxValue = Math.Max(maxValue, value);
            minValue = Math.Min(minValue, value);
            count++;
            if (count >= NotificationCount && NotificationCount > 0)
            {
                if (MaximumCalculated != null)
                {
                    MaximumCalculated(this, new MaxSampleEventArgs(minValue, maxValue));
                }
                Reset();
            }
        }

May 16, 2011 at 8:18 AM

The SampleAggregator class in the NAudio WPF demo waits until it has enough samples and then calls an FFT.

Mark

May 16, 2011 at 6:16 PM

Yes, I know that, but that's not my problem.
If I look at the Console.WriteLine(fftPos...) output in my program, it is very slow: I have to wait about 10 seconds until 1024 is displayed. In your WPF application I only have to wait about 1-2 seconds. Why is that?

I'm using a BufferedWaveProvider. Is that the problem?

May 16, 2011 at 6:58 PM

You should get to 1024 very quickly. Add needs to be called for every sample, and there should be 44,100 of them per second. When are you calling Add?

May 16, 2011 at 7:54 PM

I have modified the BufferedWaveProvider to use it in a live stream:

        public void LoadNextChunk(byte[] source, int samplePairsRequired)
        {
            int sourceBytesRequired = samplePairsRequired * 4;
            sourceBuffer = GetSourceBuffer(sourceBytesRequired);

            sourceSamples = sourceBytesRequired / 2;
            sourceBuffer = source; // note: this overwrites the buffer requested above
            sourceWaveBuffer = new WaveBuffer(sourceBuffer);
            Console.Write("sourceBytesRequired: " + sourceBytesRequired + "  Byte[50]: " + source[50] + "\n");
            sourceSample = 0;
        }

        public int Read(byte[] buffer, int offset, int count)
        {
            int read = this.buffer.Read(buffer, offset, count);
            if (read < count)
            {
                // zero the end of the buffer
                Array.Clear(buffer, offset + read, count - read);
            }

            #region left/right
            float left, right;

            ISample.LoadNextChunk(buffer, count);
            Web_LiveStream_Filoe.std_Classes.Error.Commentary("Count: " + count + " Byte[50] : " + buffer[50] + "  " + ISample.GetNextSample(out left, out right).ToString());

            left = (pan <= 0) ? left : (left * (1 - pan) / 2.0f); // pan 0.0f
            right = (pan >= 0) ? right : (right * (pan + 1) / 2.0f);
            left *= volume;
            right *= volume;
            RaiseSample(left, right);

            // Stack overflow because I call Read again inside Read -->
            // But since I read the bytes into buffer in Read(), I can rewrite the
            // Stereo16SampleProvider method so that I only need a byte array
            // instead of a WaveProvider.
            #endregion

            return count;
        }



And so I also had to edit the Stereo16SampleProvider:
        #region SampleEvent
        public event EventHandler<SampleEventArgs> Sample;

        private SampleEventArgs sampleEventArgs = new SampleEventArgs(0, 0);

        /// <summary>
        /// Raise the sample event (no check for null because it has already been done)
        /// </summary>
        private void RaiseSample(float left, float right)
        {
            sampleEventArgs.Left = left;
            sampleEventArgs.Right = right;
            Sample(this, sampleEventArgs);
        }
        #endregion


And then I simply call that in an event handler:

        private void bufferedWaveProvider_Sample(object sender, SampleEventArgs e)
        {
            wavPaint.AddMax(e.Left);
        }


May 16, 2011 at 7:58 PM

Hmmm, the classes are not really supposed to be used that way. The root cause of your problem is that your Read method only raises one sample event rather than one for every sample read.
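
For illustration only, here is a minimal sketch of what the loop inside Read could look like, reusing the ISample helper and the pan/volume fields from the code above. It assumes 16-bit stereo (4 bytes per sample pair) and that GetNextSample returns false once the chunk is exhausted:

        // Raise one event per sample pair instead of once per Read.
        // Note: LoadNextChunk expects a count of sample pairs, not bytes.
        ISample.LoadNextChunk(buffer, count / 4);
        float left, right;
        while (ISample.GetNextSample(out left, out right))
        {
            // same pan/volume handling as in the original code
            left = (pan <= 0) ? left : (left * (1 - pan) / 2.0f);
            right = (pan >= 0) ? right : (right * (pan + 1) / 2.0f);
            RaiseSample(left * volume, right * volume);
        }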

May 16, 2011 at 8:03 PM

Hmm, what would you do to change that?

May 16, 2011 at 8:04 PM

If you are streaming, use a BufferedWaveProvider as an input to a WaveChannel32. The WaveChannel32 can provide sample events. If WaveChannel32 needs a WaveStream, you might need to make a simple adapter class to connect the two together.

Mark
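
For reference, a minimal sketch of such an adapter, assuming the wrapped provider exposes a valid WaveFormat. The class name WaveProviderToWaveStream is made up for illustration, and Length/Position are stubbed out because a live stream has neither:

        using NAudio.Wave;

        // Hypothetical adapter: wraps an IWaveProvider (e.g. BufferedWaveProvider)
        // so it can be passed to classes that expect a WaveStream.
        public class WaveProviderToWaveStream : WaveStream
        {
            private readonly IWaveProvider source;
            private long position;

            public WaveProviderToWaveStream(IWaveProvider source)
            {
                this.source = source;
            }

            public override WaveFormat WaveFormat
            {
                get { return source.WaveFormat; }
            }

            // A live stream has no real length; report a huge one so that
            // WaveChannel32 never decides playback has finished.
            public override long Length
            {
                get { return long.MaxValue; }
            }

            public override long Position
            {
                get { return position; }
                set { /* repositioning a live stream is not supported */ }
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                // Simply pass the call through to the wrapped provider.
                int read = source.Read(buffer, offset, count);
                position += read;
                return read;
            }
        }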

May 16, 2011 at 8:17 PM

OK, I will try :) Thanks

May 18, 2011 at 8:01 PM

OK, I have one more question. Why is the Read method called so often in WaveChannel32, but not in the BufferedWaveProvider? And where exactly is that method called?

May 18, 2011 at 8:35 PM

If I am understanding what you are doing correctly, you are passing a BufferedWaveProvider into the WaveChannel32. So when the sound card wants more data, it calls WaveChannel32.Read. WaveChannel32.Read will call LoadNextChunk, and the sample chunk converter will go off and Read from the BufferedWaveProvider to get more data.

Mark

May 18, 2011 at 9:08 PM
Edited May 18, 2011 at 9:19 PM

OK, thanks a lot. But I have two more questions :D. First question: why does the sound card call WaveChannel32.Read about 44,100 times a second, but BufferedWaveProvider.Read only about 5-10 times?

Second question: why does my whole computer lag when your WPF demo plays a song?

EDIT: And sorry about my bad English, but I'm German and it is often really difficult to explain what I mean.

May 18, 2011 at 9:22 PM

WaveChannel32.Read should not be called 44,100 times a second unless you are playing at a really low latency! Are you sure it is called that often?

What spec is your PC? The WPF demo is visualising using FFT and drawing a waveform, so it can be a little taxing on the processor as the drawing code isn't particularly optimized.

Mark

May 18, 2011 at 9:37 PM
markheath wrote:

You should get to 1024 very quickly. Add needs to be called for every sample, and there should be 44,100 of them per second. When are you calling Add?

Oh, I thought you said that there are 44,100?

filoe

May 18, 2011 at 9:50 PM

There are 44,100 samples per second if you have opened your sound card at 44.1 kHz. The sample aggregator has to look at them all to draw the waveform and calculate the FFT. (At that rate, 1024 samples are only about 23 ms of audio, so the FFT buffer should fill almost instantly.)

May 20, 2011 at 7:38 PM
markheath wrote:

If I am understanding what you are doing correctly, you are passing a BufferedWaveProvider into the WaveChannel32. So when the sound card wants more data, it calls WaveChannel32.Read. WaveChannel32.Read will call LoadNextChunk, and the sample chunk converter will go off and Read from the BufferedWaveProvider to get more data.

Mark

OK, but how do I use a WaveChannel32 instead of a BufferedWaveProvider? I have to use the BufferedWaveProvider because of the memory: I can free the played bytes, and that's not possible with the WaveChannel32. :)

May 20, 2011 at 8:41 PM

What do you mean, that's not possible with WaveChannel32? It only holds on to the bytes it needs for each Read and then forgets them. That is how the WPF demo works.

Mark

May 25, 2011 at 10:02 PM

OK, I don't get it. Could you explain to me again how to get the data of a BufferedWaveProvider, which is buffering from the network while playing, into a WaveChannel32?

And a second question: does it make sense to program a player that works without taxing the processor?

May 25, 2011 at 10:06 PM

With NAudio you construct a playback graph, for example BufferedWaveProvider -> WaveChannel32 -> MeteringStream -> WaveOut.

Also, I'm not sure what you mean about "without taxing the processor" - any audio sample manipulation and visualisation you do will be done on the processor. There is no way to avoid this.

Mark
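
To make the graph concrete, a sketch of the wiring, assuming 44.1 kHz 16-bit stereo input, the hypothetical WaveProviderToWaveStream adapter sketched earlier in the thread, and a sampleAggregator with the Add method from the first post (the MeteringStream stage is omitted for brevity):

        // BufferedWaveProvider -> adapter -> WaveChannel32 -> WaveOut
        var bufferedProvider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));
        var channel = new WaveChannel32(new WaveProviderToWaveStream(bufferedProvider));

        // WaveChannel32 raises one Sample event per sample pair it reads.
        channel.Sample += (s, e) => sampleAggregator.Add(e.Left);

        var waveOut = new WaveOut();
        waveOut.Init(channel);
        waveOut.Play();

        // As decoded audio arrives from the network:
        // bufferedProvider.AddSamples(packet, 0, packet.Length);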

May 25, 2011 at 10:20 PM
Edited May 25, 2011 at 10:20 PM

Hmm, if I use something like bass.dll or Winamp and look at the Task Manager, there is a value of 2-3 in the CPU column, but if I look at that for your WPF sample, it is about 50...

And I know what you mean with BufferedWaveProvider -> WaveChannel32 -> MeteringStream -> WaveOut.

But which constructor or method exactly do I have to use to pass the BufferedWaveProvider into the WaveChannel32? <-- I am just a bit confused at the moment :D

May 25, 2011 at 10:26 PM
Edited May 25, 2011 at 10:27 PM

Ah yes, WaveChannel32 is expecting a WaveStream, and BufferedWaveProvider is only an IWaveProvider. You can make an adapter class that inherits from WaveStream and, in its Read, just passes through to the IWaveProvider (see the adapter sketch earlier in this thread). In the next NAudio, WaveChannel32 will be a less important part of the framework.

As for performance, I suspect it is due to very inefficient visualisation code - the WPF waveform was just a proof of concept. A WriteableBitmap would probably be a much better performing solution. Also, managed code will always struggle to compete with native C/C++ for things like audio, since .NET introduces some extra overhead (garbage collection, copying instead of casting, out-of-bounds checking on every array access, etc.).
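
To illustrate the suggestion, a hypothetical sketch of a WriteableBitmap-based waveform renderer (this is not how the demo works, and the FastWaveformRenderer name is made up). It writes one pixel column per min/max pair instead of allocating WPF shapes:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        public class FastWaveformRenderer
        {
            private readonly WriteableBitmap bitmap;
            private readonly int[] pixels;
            private readonly int width, height;
            private int x;

            public FastWaveformRenderer(Image target, int width, int height)
            {
                this.width = width;
                this.height = height;
                bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
                pixels = new int[width * height];
                target.Source = bitmap;
            }

            // min and max are in the range [-1, 1], e.g. from MaxSampleEventArgs.
            // Must be called on the UI thread.
            public void AddColumn(float min, float max)
            {
                int yTop = (int)((1 - max) / 2 * (height - 1));
                int yBottom = (int)((1 - min) / 2 * (height - 1));
                for (int y = 0; y < height; y++)
                {
                    pixels[y * width + x] = (y >= yTop && y <= yBottom)
                        ? unchecked((int)0xFF00FF00)   // green waveform
                        : unchecked((int)0xFF000000);  // black background
                }
                x = (x + 1) % width;
                // One WritePixels call per column; batching per frame
                // would reduce the overhead further.
                bitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, 0);
            }
        }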

May 26, 2011 at 1:05 PM
Edited May 26, 2011 at 6:19 PM

Thanks. I have another idea:

I receive bytes from the network; is it possible to write them directly into the WaveChannel32 instead of the BufferedWaveProvider?

And a second question: would it make sense to make a C++ DLL that receives the spectrum data from my C# application and draws the waveform onto the handle of my panel or something similar? Or, as another idea: I receive an Image from the C++ DLL and draw that onto my PictureBox.

Third question: how can I create an empty instance of a WaveStream to set as the source stream in WaveChannel32 to write into?

May 27, 2011 at 2:49 PM

Please help me :)

May 27, 2011 at 9:16 PM

Well, that is what the BufferedWaveProvider is for - feeding audio data from the network into an audio pipeline.

Second - you could, although it would probably be simpler just to write a better-optimised waveform renderer in C#. As I said, the drawing method used is not an efficient one.

Third - you create a class derived from WaveStream (see the adapter sketch earlier in this thread).

Jun 2, 2011 at 4:46 PM

I don't get it :(

Could you maybe write a little sample showing how to connect the BufferedWaveProvider to a WaveChannel32?

Jun 6, 2011 at 9:42 AM

I'm writing lots of new NAudio demos for the next version. I'll try to include something like this in the list.

Jun 8, 2011 at 10:14 PM

When do you think your next version will come out?

Jun 8, 2011 at 10:18 PM

I don't have a definite date I'm afraid, but I'm hoping sometime over the summer as a lot of new stuff has gone into NAudio 1.5 already and it would be good to get it out there. NAudio is just a hobby project I do in my spare time and it is quite hard keeping up with all the support requests these days (I think StackOverflow is sending a lot of people my way)

Mark

Jun 10, 2011 at 5:25 PM
Edited Jun 13, 2011 at 12:15 PM

OK, I've done it now. Thanks a lot :)