Equalizer

Dec 7, 2010 at 1:27 PM

Hi,

Is there an easy way to implement/use an equalizer (not the GUI, just the audio processing part) in NAudio?

By Equalizer I mean an ability to control different frequency bands (say 5) ranging from bass to treble.

Thanks,

Yuval

Coordinator
Dec 10, 2010 at 2:53 PM

NAudio doesn't include an equaliser, but one way to go would be to use an effects framework with NAudio, like in the Skype Voice Changer project (skypefx.codeplex.com).

Mark

Dec 10, 2010 at 3:23 PM

Hi Mark,

Thanks for the lead - I'll take a look.

Just a friendly warning - the last two times you told me something didn't exist in NAudio (WMA, OGG), it ended up existing... ;-)

Yuval

Dec 10, 2010 at 4:01 PM

Looked at the code. Brilliant work! It works great out of the box!

Now I really don't understand why effects are not part of NAudio... It just calls for an EffectStream to be included.

The only thing I would change is separating the effects from the effects UI (sliders, etc). I think they should not be mixed up.

I'll try to work on making this an NAudio module/add-on during this week end.

Yuval

Coordinator
Dec 10, 2010 at 4:03 PM

The effects are this way in SkypeFx because they were modelled on the effects framework from REAPER. I do plan to bring something similar into NAudio, but it is easy to add yourself in the meantime.

Mark

Dec 11, 2010 at 3:47 AM
Edited Dec 13, 2010 at 9:58 PM

Hi Mark,

I now have an equalizer (Three Band) working with NAudio. It was easy to use the code from SkypeFX. Thanks for the great lead!

It works like a charm and seems to sound better than SkypeFX - in SkypeFX there were many ticks/noises when I applied the Three Band effect with high gain/drive values, but I don't have these any longer with the new implementation.

I changed the original Process16Bit routine to ApplyDSPEffects so that it handles floats natively (using the ByteAndFloatsConverter technique), and I think this might have helped.
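For readers unfamiliar with the ByteAndFloatsConverter technique mentioned above: it appears to be a union-style struct, the same trick NAudio's WaveBuffer uses, that overlays a byte[] and a float[] on one object reference so the float view reads the byte buffer without copying. A minimal sketch, assuming the struct looks roughly like this (the real one in the demo may differ):

```csharp
using System;
using System.Runtime.InteropServices;

// Union-style struct: both fields share offset 0, so assigning the Bytes
// field also makes the same memory readable through the Floats field.
// This mirrors the trick NAudio's WaveBuffer uses; the exact shape of the
// real ByteAndFloatsConverter may differ.
[StructLayout(LayoutKind.Explicit)]
public struct ByteAndFloatsConverter
{
    [FieldOffset(0)] public byte[] Bytes;
    [FieldOffset(0)] public float[] Floats;
}

public static class ConverterDemo
{
    public static void Main()
    {
        var c = new ByteAndFloatsConverter { Bytes = BitConverter.GetBytes(1.0f) };
        Console.WriteLine(c.Floats[0]); // reads the same 4 bytes back as a float
    }
}
```

One caveat with this trick: the Length seen through the Floats field is still the byte count, so the caller has to track the number of floats separately.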

I also changed the class names from Effect to DSPEffect and Slider to DSPEffectFactor. Slider is a GUI-ish name, which is confusing as it really represents a factor in the DSP effect.

The processing should also be faster as there is less work per sample:

        /// <summary>
        /// Applies the DSP effects in the effects chain
        /// </summary>
        /// <param name="buffer">Interleaved stereo buffer of IEEE float samples</param>
        /// <param name="count">Number of stereo sample pairs in the buffer</param>
        private void ApplyDSPEffects(float[] buffer, int count)
        {
            int samples = count * 2;
            foreach (DSPEffect effect in m_dspEffects)
            {
                if (effect.Enabled)
                {
                    effect.Block(samples);
                }
            }

            for(int sample = 0; sample < samples; sample+=2)
            {
                // get the sample(s)
                float sampleLeft = buffer[sample];
                float sampleRight = buffer[sample + 1];
               
                // run these samples through the effects chain
                foreach (DSPEffect effect in m_dspEffects)
                {
                    if (effect.Enabled)
                    {
                        effect.Sample(ref sampleLeft, ref sampleRight);
                    }
                }

                // put them back
                buffer[sample] = sampleLeft;
                buffer[sample + 1] = sampleRight;
            }
        }

Please let me know if you would like me to create a small standalone demo that shows usage of the Equalizer effect (the most useful effect IMO) with NAudio.

Edit (12/13/2010): The latest version (Beta 4) of Practice# was released with the Three Band Equalizer, for those who want to see how NAudio works with an equalizer.

http://code.google.com/p/practicesharp/downloads/list

Once again, thank you for writing excellent re-usable code.

Thanks,

Yuval 

Editor
Dec 30, 2010 at 12:09 PM

Hi Yuval,

Did you create that small demo application on how to chain the Equalizer effect in to NAudio?

Cheers,

Sebastian

Dec 30, 2010 at 1:10 PM

Hi Sebastian,

No, I didn't create that demo - Mark did not reply, so I thought there was no demand for it.

If you'd like I could easily arrange such a demo application.

Thanks,

Yuval

Editor
Dec 30, 2010 at 1:19 PM

Hi Yuval,

Yes please, that would be cool.

I had a quick look at your application, looks good - do you do the speeding up and slowing down of the audio playback using NAudio?

Cheers,

Sebastian 

Dec 30, 2010 at 1:26 PM

Sure, I will do that in the next day or so.

As for Practice# - thanks. I change the playback speed using SoundTouch (http://www.surina.net/soundtouch/).

There is a wrapper I wrote, SoundTouchSharp, that allows .NET to use that native library.

NAudio is the audio framework/infrastructure - used for playback control, buffers, file readers but not for speed/pitch change.

BTW: I documented my design decisions and choice of libraries in a CodeProject article:

http://www.codeproject.com/KB/audio-video/practice_sharp.aspx

Yuval

Editor
Dec 30, 2010 at 1:36 PM

Cool, I'll have a look at that article, looks really interesting.

Cheers,
Sebastian 

Dec 31, 2010 at 2:48 AM
Edited Dec 31, 2010 at 12:10 PM

Hi Sebastian,

I just uploaded a demo of NAudio using an Equalizer (sources+binary).

It could be a nice tutorial.

The sources are based on Mark's SkypeFX library.

A few important remarks:

1. The DSPEffectStream class currently expects IeeeFloat streams. IMO working with floats is cleaner and more accurate compared to PCM conversion. So the equalizer effect is chained AFTER the WaveChannel32.

This can be changed of course to support PCM but I do not have time to do it, and I found the original PCM version to be problematic (sound quality/noises).

2. DSPEffectStream supports only one DSPEffect. That was intentional - advanced chaining of effects can be added like Mark did in SkypeFX, but I felt it was not much needed, since NAudio allows chaining as it is.

3. This is demo code - I did not handle threading issues, for example. (BTW: This whole demo was written especially to show simple usage of the equalizer; my application Practice# did not require these threading protections since it used a single audio processing thread.)

http://code.google.com/p/practicesharp/downloads/detail?name=NAudioEqualizer.zip

Thanks,

Yuval

Jan 17, 2011 at 7:42 PM

Hello Yuval, Mark, et al.,

So I've been using NAudio with the EQ code from SkypeFX, with Yuval's modifications. I'm not sure if I'm interpreting its use correctly, but if I am, it's not behaving as expected. I was wondering if anyone else has tried to verify the behavior of the EQs using a frequency analysis tool.

So, for my particular application, I'd like to be able to specify a frequency and gain, and then have roughly an octave band around that frequency raised. To do this, I'm setting the LoMedFrequencyFactor and MedHiFrequencyFactor to values surrounding the central frequency, and then boosting the MedGainFactor. If my understanding is correct, the example code below should boost the frequencies in the octave centered around 1 kHz. But what I'm getting is more like all the frequencies below 640 Hz boosted. I've tested other scenarios as well, boosting the LoGainFactor or HiGainFactor instead, and with different frequencies, and I don't get what I would expect.

I tested this using white noise, and I have before and after images of the freq. analysis I can send someone if they are interested. I don't see a way to attach them here. 

        public void ApplyEQTest()
        {
            _player.Pause();
            _eqEffect.LoDriveFactor.Value = 0;
            _eqEffect.MedDriveFactor.Value = 0;
            _eqEffect.HiDriveFactor.Value = 0;
            _eqEffect.LoMedFrequencyFactor.Value = 640;
            _eqEffect.MedHiFrequencyFactor.Value = 1280;
            _eqEffect.LoGainFactor.Value = 0;
            _eqEffect.HiGainFactor.Value = 0;
            _eqEffect.MedGainFactor.Value = 24;
            _eqEffect.OnFactorChanges();
            _player.Volume = 1.0f;
            _player.Play();
        }
Thanks,
Rob
Jan 17, 2011 at 7:59 PM

Hi Rob,

I think the drive factors should not be zero.

In my test application I did not change the default value of Drive properties, just played with Gain properties. 

Also, the gain maximum is 12 dB by default, not 24.

I would suggest that you take my test application and play with it first, over several iterations - that is, change one parameter at a time until you get what you want.

That's how I fine tuned the Equalizer for Practice# - I tested it with the test app.

HTH,

Yuval

Jan 17, 2011 at 9:45 PM

Hi Yuval,

I have tried it with different drive properties - I only put those values in the code to test different options since the defaults weren't producing an accurate response. I did change the default max/min on the gain so that I could see a bigger impact.

Can you tell me if you've actually verified the frequency response? Do you believe my assumption about how the MedGainFactor should work (boosting only between the LoMed and MedHi frequency factors) is correct?

Thanks.

Jan 17, 2011 at 10:22 PM

Hi Rob,

 I did not write the original code - I modified and re-factored it.

The only actual frequency test I performed was a simple one - use my hearing, not with any scope.

If you run Practice# and play with the equalizer it actually works. Raise the medium track bar and you will hear a very pronounced medium boost.

This is done by playing with MedGainFactor.

But I'm not sure if changing the frequency values actually has such a dramatic effect on the equalizer.

This is the only place in the code where there is a reference to the frequencies:

            // Low frequency
            al = Min(LoMedFrequencyFactor.Value, SampleRate) / SampleRate;

            // High frequency
            ah = Max(Min(MedHiFrequencyFactor.Value, SampleRate) / SampleRate, al);

Perhaps Mark can help with this.
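For concreteness, here is a small standalone sketch of what those two lines compute, assuming Min/Max are Math.Min/Math.Max (the 880 Hz and 5000 Hz crossover values below are purely illustrative, not taken from the code):

```csharp
using System;

class FrequencyCoefficients
{
    static void Main()
    {
        float sampleRate = 44100f;
        float loMedFrequency = 880f;   // illustrative crossover values only
        float medHiFrequency = 5000f;

        // Low coefficient: crossover frequency as a fraction of the sample rate
        float al = Math.Min(loMedFrequency, sampleRate) / sampleRate;

        // High coefficient: same idea, clamped so it is never below al
        float ah = Math.Max(Math.Min(medHiFrequency, sampleRate) / sampleRate, al);

        Console.WriteLine($"al={al}, ah={ah}"); // al ≈ 0.01995, ah ≈ 0.11338
    }
}
```

Both coefficients are plain ratios fed into one-pole filter sections, which would be consistent with the frequency factors having a fairly gentle, shelving-like effect rather than the sharp band edges Rob expected.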

Thanks,

Yuval

Coordinator
Jan 20, 2011 at 10:51 AM

I'm afraid I don't know how this particular equaliser works. You might be better off asking on the KVR DSP forum for details about the workings of a particular equalizer implementation.

Mark

Jan 20, 2011 at 7:44 PM

Rob,

I concur with what Mark wrote.

Maybe you could try to look at another equalizer implementation. For example:

http://www.musicdsp.org/archive.php?classid=3#236

You could try to port it to C# instead of the current one.

Thanks,

Yuval

Jan 21, 2011 at 9:42 PM

Thanks very much, guys, for your responses. I'll look into that. The particular app I'm working on requires something quite specific. I'll let you know if I have any success.

Jan 21, 2011 at 9:53 PM

No worries, sorry I couldn't help more than that - not a DSP guy.

But if you create something good, let us know... I might be borrowing it for myself ;)

Good luck,

Yuval

Aug 19, 2013 at 7:25 PM
Question, how easy is it to expand the 3-band equalizer to multiple bands?
Aug 19, 2013 at 8:23 PM
I don't know how easy it is, as I haven't coded and tested it for more bands yet.
But looking at the code (below, found in the file EqualizerEffect.cs), it seems that with some fiddling with the equations (i.e. some algebra, plus a loop instead of hard-coded low, medium and high) one could add more than 3 bands without much effort:
        public override void Sample(ref float spl0, ref float spl1)
        {
            float dry0 = spl0;
            float dry1 = spl1;

            float lf1h = lfh;
            lfh = dry0 + lfh - ah * lf1h;
            float high_l = dry0 - lfh * ah;

            float lf1l = lfl;
            lfl = dry0 + lfl - al * lf1l;
            float low_l = lfl * al;

            float mid_l = dry0 - low_l - high_l;

            float rf1h = rfh;
            rfh = dry1 + rfh - ah * rf1h;
            float high_r = dry1 - rfh * ah;

            float rf1l = rfl;
            rfl = dry1 + rfl - al * rf1l;
            float low_r = rfl * al;

            float mid_r = dry1 - low_r - high_r;

            float wet0_l = mixlg * Sin(low_l * HalfPiScaled);
            float wet0_m = mixmg * Sin(mid_l * HalfPiScaled);
            float wet0_h = mixhg * Sin(high_l * HalfPiScaled);
            float wet0 = (wet0_l + wet0_m + wet0_h);

            float dry0_l = low_l * mixlg1;
            float dry0_m = mid_l * mixmg1;
            float dry0_h = high_l * mixhg1;
            dry0 = (dry0_l + dry0_m + dry0_h);

            float wet1_l = mixlg * Sin(low_r * HalfPiScaled);
            float wet1_m = mixmg * Sin(mid_r * HalfPiScaled);
            float wet1_h = mixhg * Sin(high_r * HalfPiScaled);
            float wet1 = (wet1_l + wet1_m + wet1_h);

            float dry1_l = low_r * mixlg1;
            float dry1_m = mid_r * mixmg1;
            float dry1_h = high_r * mixhg1;
            dry1 = (dry1_l + dry1_m + dry1_h);

            spl0 = dry0 + wet0;
            spl1 = dry1 + wet1;
        }
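To illustrate the "loop instead of hard-coded low/medium/high" idea, here is a hedged sketch that cascades the same style of one-pole low-pass section once per crossover. This is NOT the original code and has not been compared against it; the class and member names are invented for illustration. The bands sum back to the input by construction (mirroring `mid_l = dry0 - low_l - high_l` above), after which per-band gains could be applied as in the three-band version:

```csharp
using System;

// Hedged sketch only: generalizes the three-band split to N bands by looping
// one-pole low-pass sections over N-1 crossover frequencies. Names are
// invented for illustration; this is not taken from EqualizerEffect.cs.
public class MultiBandSplitter
{
    private readonly float[] coeffs; // one coefficient per crossover, like al/ah
    private readonly float[] state;  // one filter state per crossover (per channel)

    public MultiBandSplitter(float[] crossoverHz, float sampleRate)
    {
        coeffs = new float[crossoverHz.Length];
        state = new float[crossoverHz.Length];
        for (int i = 0; i < crossoverHz.Length; i++)
            coeffs[i] = Math.Min(crossoverHz[i], sampleRate) / sampleRate;
    }

    // Splits one mono sample into crossoverHz.Length + 1 bands; the bands
    // always sum back to the input, mirroring dry0 - low_l - high_l above.
    public void Split(float input, float[] bands)
    {
        float remaining = input;
        for (int i = 0; i < coeffs.Length; i++)
        {
            // Same recurrence as lfl = dry0 + lfl - al * lfl in the original
            state[i] = remaining + state[i] - coeffs[i] * state[i];
            float low = state[i] * coeffs[i]; // low-passed portion of this band
            bands[i] = low;
            remaining -= low;                 // hand the rest to the next section
        }
        bands[coeffs.Length] = remaining;     // whatever is left is the top band
    }
}
```

For stereo you would keep one MultiBandSplitter per channel, just as the original keeps separate lfl/lfh and rfl/rfh states for left and right.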
Aug 21, 2013 at 12:53 PM
I was thinking about that, but I guess it requires a lot of testing and comparing against a known working sample. I just wish there were an easy way to incorporate Audacity's equalizer into the NAudio framework; it appears to apply some kind of formula based on the parametric settings and send the audio through it, but it doesn't look similar to this at all.

I guess the initialization routine takes each of the bands and calculates some fixed parameters which are applied to each sample in the "Sample" routine.

What do each of these parameters mean - the dry parameters and the wet parameters? I assume they have meaning and the selection of these names was not random?

Thanks
Paul
Coordinator
Aug 21, 2013 at 2:15 PM
A signal with no effects is called "dry"; a signal with effects is called "wet". With some effects you mix the dry in with the wet (e.g. reverb). It's not so common with EQ, though.
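As a trivial illustration of the terminology (made-up names, not code from the EQ): a dry/wet mix is just a linear blend of the unprocessed and processed samples.

```csharp
using System;

public static class DryWetDemo
{
    // Blends the unprocessed ("dry") sample with the effected ("wet") sample.
    // mix = 0 gives all dry, mix = 1 gives all wet.
    public static float MixDryWet(float dry, float wet, float mix)
    {
        return dry * (1f - mix) + wet * mix;
    }

    public static void Main()
    {
        // e.g. an effect return mixed in at 25% wet
        Console.WriteLine(MixDryWet(1.0f, 0.0f, 0.25f)); // 0.75
    }
}
```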
Dec 1, 2013 at 6:09 AM
yuvalnv wrote:
Hi Sebastian, I just uploaded a demo of NAudio using an Equalizer (sources+binary). It could be a nice tutorial. The sources are based of Mark's SkypeFX library. A few important remarks: 1. The DSPEffectStream class is currently expecting IeeeFloat streams. IMO working with floats is cleaner and more accurate compare to PCM conversion. So the equalizer effect is chained AFTER the WaveChannel32. This can be changed of course to support PCM but I do not have time to do it, and I found the original PCM version to be problematic (sound quality/noises). 2. DSPEffectStream  supports only one DSPEffect. That was intentional - Advanced Chaining of effects can be added like Mark did in SkypeFX but I felt it was not so much needed, since NAudio allows chaining as it is. 3. This is demo code - I did not handle threading issues for example. (BTW: This whole demo was especially written for the purpose of showing simple usage of the equalizer, my application Practice# did not require these threading protections since it was using a single audio processing thread). http://code.google.com/p/practicesharp/downloads/detail?name=NAudioEqualizer.zip   Thanks, Yuval
I am using NAudio in a project and wanted to implement a 3-band EQ as well, but since you already have one in your equalizer demo, I just borrowed that. It appears to work well, but I am seeing a strange problem. If I init my waveOut device with a DSPEffectStream, then my PlaybackStopped event handler is never called when my audio file finishes. If I use the stream from an AudioFileReader, then the event handler is called just fine when the file finishes.

Is this a known issue?
Coordinator
Dec 2, 2013 at 11:13 AM
If PlaybackStopped is not firing, it may be that DSPEffectStream is never-ending. The Read method must return 0 for playback to end.
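A minimal, NAudio-free sketch of that contract (all names below are hypothetical stand-ins, not NAudio types): a wrapping stream should forward its source's return value unchanged so that the 0 at end-of-stream reaches the player.

```csharp
using System;

// Hypothetical stand-ins for a source stream and an effect wrapper, just to
// show the end-of-stream contract: Read returns 0 once the data runs out,
// and the wrapper must pass that 0 through for playback to stop.
class FiniteSource
{
    private int remaining;
    public FiniteSource(int totalBytes) { remaining = totalBytes; }

    public int Read(byte[] buffer, int offset, int count)
    {
        int n = Math.Min(count, remaining); // becomes 0 once exhausted
        remaining -= n;
        return n;
    }
}

class EffectWrapper
{
    private readonly FiniteSource source;
    public EffectWrapper(FiniteSource s) { source = s; }

    public int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = source.Read(buffer, offset, count);
        // (apply effects to buffer[offset .. offset + bytesRead) here)
        return bytesRead; // forwarding 0 is what lets PlaybackStopped fire
    }
}
```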
Dec 4, 2013 at 8:41 PM
It appears that the Read method in DSPEffectStream is returning the same non-zero value each time it is called. I just took the implementation that yuvalnv posted in the NAudio equalizer demo, so I am not sure if my code is at fault or if it's the demo code.

FWIW, the implementation looks like this:
        public override int Read(byte[] buffer, int offset, int count)
        {
            int bytesRead = SourceStream.Read(buffer, offset, count);

            if (ActiveDSPEffect.Enabled)
            {
                if (SourceStream.WaveFormat.Encoding == WaveFormatEncoding.IeeeFloat)
                {
                    ByteAndFloatsConverter convertInputBuffer = new ByteAndFloatsConverter { Bytes = buffer };
                    ProcessDataIeeeFloat(convertInputBuffer, offset, bytesRead);
                }
                else
                {
                    // Do not process other types of streams
                }
            }

            return bytesRead;
        }
SourceStream is a WaveStream object passed to DSPEffectStream as a construction parameter.

I will continue monkeying around with this to see if I can get it to work, but I am not that familiar with nAudio at this point.
thanks
Dec 4, 2013 at 10:06 PM
DSPEffectStream is just returning the bytesRead from the SourceStream, i.e. the external WaveStream provided to the constructor.
So it seems it is not DSPEffectStream that is not ending but rather the SourceStream.

" If i init my waveOut device with a DSPEffectStream, then my PlaybackStopped event handler is never called when my audio file finishes. if I use the stream from an AudioFileReader, then the event handler is called just fine when the file finishes. "

It seems like the waveOut device you pass is not returning zero, where as the AudioFileReader is returning zero when reaching end of stream.

Can you show how the waveOut device was created and initialized?

HTH
Yuval
Dec 5, 2013 at 7:28 PM
I am trying to integrate NAudio into an XNA game framework application, so the application is split into multiple classes and methods. Here's a basic summary. Please excuse the poor commenting, structure and variable names; this is just quick test code:
public class Game1 : Microsoft.Xna.Framework.Game
{
        IWavePlayer waveOut;
        WaveStream waveStream;
        WaveChannel32 waveChannel;
        EqualizerEffect eqEffect;

        // fields referenced in LoadContent below
        SpriteBatch spriteBatch;
        SpriteFont text;

        AudioFileReader file;

        protected override void Initialize()
        {
            waveOut = new WaveOut();
            waveOut.PlaybackStopped += waveOut_PlaybackStopped;
        }

        public void LoadFile(string filename)
        {
            file = new AudioFileReader(filename);

            waveChannel = new WaveChannel32(file);

            eqEffect = new EqualizerEffect();

            eqEffect.HiGainFactor.Value = eqEffect.HiGainFactor.Maximum;
            eqEffect.MedGainFactor.Value = 0;
            eqEffect.LoGainFactor.Value = eqEffect.LoGainFactor.Minimum;

            eqEffect.OnFactorChanges();

            waveStream = new DSPEffectStream(waveChannel, eqEffect);

            waveOut.Init(waveStream);

            file.Volume = 0.5f;
        }

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);

            text = Content.Load<SpriteFont>("text");

            LoadFile("06 - Swoon.mp3");

            waveOut.Play();
        }

        void waveOut_PlaybackStopped(object sender, StoppedEventArgs e)
        {
            // load the next file in the playlist; unimplemented for now
            LoadFile("04 - Dissolve.mp3");

            waveOut.Play();
        }
}

Dec 16, 2013 at 11:50 PM
bump.
Dec 17, 2013 at 7:24 PM
Sorry, I don't know how to help you based on this.

If you could recreate the problem in a small demo that runs from the command line - no XNA, etc. - then it would be much easier to find the root cause.
I have done the same thing in the past when I wanted to debug or demonstrate a problem.

I'm also wondering if the MP3 files have anything to do with this issue. Anyway, a standalone demo is the best way IMO to move forward.