NAudiohttp://naudio.codeplex.com/project/feeds/rssA .NET library for writing audio-related applications, with the ability to record, play and manipulate sample data.New Post: Transfer function https://naudio.codeplex.com/discussions/662994<div style="line-height: normal;">Girl, you have no idea what you're doing. Besides that and your load of spelling mistakes, as I said initially, it is NOT a trivial task. I don't even know exactly how to do it myself.<br />
</div>FreefallThu, 25 May 2017 14:41:13 GMTNew Post: Transfer function 20170525024113PNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)https://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">Oh, I see! I don't have to do an FFT to apply it. What I have to do is take, from the chart of my function, the threshold - the first point for negative values and the last point for positive values of my input - and the knee of the effect. But I don't really understand how I can implement my other nonlinearities - the other points in the chart... I will think about it...<br />
</div>KessaThu, 25 May 2017 11:57:10 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170525115710ANew Post: Transfer function http://naudio.codeplex.com/discussions/662994<div style="line-height: normal;">So, I tried to do an FFT of my wave file and then apply my polynomial transfer function:
<br />
<br />
<pre><code>void FftCalculated(object sender, FftEventArgs e)
{
    // y[n] = A*x^3[n] + B*x^2[n] + C*x[n] + D; // my transfer function
    for (int i = 0; i < e.Result.Length; i++)
    {
        fftResultX[i] = e.Result[i].X; // FFT result, real part (Complex.X)
        fftResultY[i] = e.Result[i].Y; // FFT result, imaginary part (Complex.Y)
        // distortion - apply my transfer function to the FFT result
        fftResultH[i].X = A * (e.Result[i].X) * (e.Result[i].X) * (e.Result[i].X) + B * (e.Result[i].X) * (e.Result[i].X) + C * (e.Result[i].X) + D;
        fftResultH[i].Y = A * (e.Result[i].Y) * (e.Result[i].Y) * (e.Result[i].Y) + B * (e.Result[i].Y) * (e.Result[i].Y) + C * (e.Result[i].Y) + D;
    }
}</code></pre>
where fftResultH is the spectrum with the distortion effect of my transfer function applied. Now I want to play the effected sound in real time: when I open my wave file and press the play button, I want to hear the effected sound, and I want to be able to change the parameters A, B, C, D of my transfer function in real time... Can I do it this way?
<br />
I'm using SampleAggregator. <br />
</div>KessaThu, 25 May 2017 06:48:16 GMTNew Post: Transfer function 20170525064816ANew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)http://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">Yes, my polynomial function is doing the distortion, and I want to change the parameters functionVarI, functionVarH, functionVarG, functionVarF, etc. in real time by dragging and moving points in the chart of my polynomial function, recomputing these parameters in real time, and playing the distorted sound...
<br />
</div>KessaThu, 25 May 2017 06:28:26 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170525062826ANew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)http://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">As I said before, I assume your poly function is doing the distortions.
<br />
<br />
<a href="https://skypefx.codeplex.com/" rel="nofollow">Here</a> is an example of how to do realtime effects with NAudio.<br />
</div>FreefallWed, 24 May 2017 19:08:06 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170524070806PNew Post: How can I normalize my volume of my recorded wav file?http://naudio.codeplex.com/discussions/663048<div style="line-height: normal;">Simply loop through each sample and find the maximum absolute sample value. Then loop through each sample again and multiply by (1 / max sample).<br />
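The two-pass approach described above can be sketched like this (a minimal illustration, not tied to any NAudio type; it assumes the audio has already been read into a float[] of 32-bit samples, e.g. via an ISampleProvider):

```csharp
using System;

class PeakNormalize
{
    // Scale all samples so the loudest peak hits full scale (1.0f).
    static void Normalize(float[] samples)
    {
        // Pass 1: find the maximum absolute sample value.
        float max = 0f;
        foreach (float s in samples)
            max = Math.Max(max, Math.Abs(s));

        if (max == 0f) return; // silence - nothing to scale

        // Pass 2: multiply every sample by 1/max.
        float gain = 1f / max;
        for (int i = 0; i < samples.Length; i++)
            samples[i] *= gain;
    }

    static void Main()
    {
        var samples = new float[] { 0.1f, -0.5f, 0.25f };
        Normalize(samples);
        Console.WriteLine(string.Join(", ", samples)); // 0.2, -1, 0.5
    }
}
```

The normalized samples could then be written back out, e.g. with WaveFileWriter.WriteSamples.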
</div>FreefallWed, 24 May 2017 19:04:41 GMTNew Post: How can I normalize my volume of my recorded wav file? 20170524070441PNew Post: Fast Forwarding and Rewindhttp://naudio.codeplex.com/discussions/663052<div style="line-height: normal;">You can do that with adjustable resampling, e.g. use WdlResamplingSampleProvider and modify the output sample rate while playing.<br />
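The speed-change-via-resampling idea can be sketched without any NAudio types (a minimal illustration of the concept only; note that NAudio's WdlResamplingSampleProvider takes its output rate in the constructor, so a rate that the UI adjusts while playing would need a custom ISampleProvider wrapper, which is an assumption here):

```csharp
using System;
using System.Collections.Generic;

class Varispeed
{
    // Naive variable-speed playback: read the source at a fractional step.
    // rate > 1.0 fast-forwards, rate < 1.0 slows down (pitch changes too).
    static float[] Resample(float[] source, double rate)
    {
        var output = new List<float>();
        for (double pos = 0; pos + 1 < source.Length; pos += rate)
        {
            int i = (int)pos;
            double frac = pos - i;
            // Linear interpolation between neighbouring samples.
            output.Add((float)(source[i] * (1 - frac) + source[i + 1] * frac));
        }
        return output.ToArray();
    }

    static void Main()
    {
        var source = new float[] { 0f, 1f, 2f, 3f, 4f, 5f, 6f, 7f };
        // Double speed: roughly half as many output samples.
        Console.WriteLine(Resample(source, 2.0).Length); // 4
    }
}
```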
</div>FreefallWed, 24 May 2017 19:02:42 GMTNew Post: Fast Forwarding and Rewind 20170524070242PNew Post: Transfer function http://naudio.codeplex.com/discussions/662994<div style="line-height: normal;">I'm not deep into the maths, but what I know so far is:<br />
<br />
Each bin of the FFT result is equally spaced in the frequency spectrum.<br />
<br />
For example when SampleRate is 44100 Hz and FFT Size is 1024:<br />
<pre><code>frequency[0] = 0 * 44100 / (1024 * 2)       // first bin is always 0 Hz, called the DC offset
frequency[1] = 1 * 44100 / (1024 * 2)       // second bin, here at ~22 Hz
frequency[2] = 2 * 44100 / (1024 * 2)       // third bin, here at ~43 Hz
...
frequency[1024] = 1024 * 44100 / (1024 * 2) // last bin is always half the SampleRate, here 22050 Hz</code></pre>
So just loop over the FFT result and calculate the frequency and amplitude for each bin.<br />
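Following the formula above, the loop can be sketched like this (a minimal illustration; it assumes the FFT result is available as arrays of real and imaginary parts, and uses the post's convention that bin spacing is SampleRate / (2 * FftSize)):

```csharp
using System;

class FftBins
{
    static void Main()
    {
        const int sampleRate = 44100;
        const int fftSize = 1024;

        // Dummy FFT result: real and imaginary parts per bin (all zeros here).
        var real = new double[fftSize + 1];
        var imag = new double[fftSize + 1];

        for (int k = 0; k <= fftSize; k++)
        {
            // Bin spacing per the post: SampleRate / (2 * FftSize).
            double frequency = (double)k * sampleRate / (fftSize * 2);
            // Amplitude is the magnitude of the complex bin value.
            double amplitude = Math.Sqrt(real[k] * real[k] + imag[k] * imag[k]);
            if (k == 0 || k == fftSize)
                Console.WriteLine($"bin {k}: {frequency} Hz, amplitude {amplitude}");
        }
    }
}
```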
</div>FreefallWed, 24 May 2017 18:43:07 GMTNew Post: Transfer function 20170524064307PNew Post: Transfer function https://naudio.codeplex.com/discussions/662994<div style="line-height: normal;">Hello, so I tried to do an FFT using SampleAggregator.cs, but I'm not sure I really understand the SampleAggregator class. Is it an instance that takes samples from my wave file (or any input) and applies the FFT algorithm to every single sample of my wave file - that's the Add method in this class, right? And does the length of the FFT determine how large the step is between the computed frequencies in the spectrum? For example, if the frequency of my point[n] is 100 Hz, the frequency of point[n+1] is point[n] plus sample rate / FFT length; so with a sample rate of 44100 and an FFT length of 1024, point[n+1] = point[n] + (44100/1024), so if point[n] is 100 then point[n+1] = 143.066... ??? Am I right, or absolutely dumb...?? :-D
<br />
Thanks for answer.<br />
</div>KessaWed, 24 May 2017 12:53:47 GMTNew Post: Transfer function 20170524125347PNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)https://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">Could I ask? In this project I want to apply a waveshaper effect, but with a changing transfer function. getFunction(double x) is my transfer function; I get it by taking the positions of the points from my chart and applying polynomial regression to get the polynomial equationVar = functionVarA * (Math.Pow(x, 8)) + functionVarB * (Math.Pow(x, 7)) + functionVarC * (Math.Pow(x, 6)) + functionVarD * (Math.Pow(x, 5)) + functionVarE * (Math.Pow(x, 4)) + functionVarF * (Math.Pow(x, 3)) + functionVarG * (Math.Pow(x, 2)) + functionVarH * x + functionVarI.
<br />
Then I want to play this effected, distorted sound - when I click the play button I want to hear the effected sound instead of the original wav file sound, and I still want to be able to change the transfer function while the audio is playing.
<br />
My problem is that if I apply this function in the Read method, I can't hear a fluent effected sound.
<br />
So I ask: is there any solution for a waveshaper effect in NAudio?
<br />
I'll be so grateful for an answer. Thank you so much... <br />
</div>KessaTue, 23 May 2017 10:13:10 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170523101310ANew Post: Fast Forwarding and Rewindhttp://naudio.codeplex.com/discussions/663052<div style="line-height: normal;">Hi All,
<br />
<br />
Please help me with how to do fast forwarding and rewind using NAudio.<br />
</div>sivaram_mscssTue, 23 May 2017 07:41:18 GMTNew Post: Fast Forwarding and Rewind 20170523074118ANew Post: How can I normalize my volume of my recorded wav file?https://naudio.codeplex.com/discussions/663048<div style="line-height: normal;">I hope someone can help me. I have a recorded wav file which I have already sent through the SimpleCompressor class like this:<br />
<pre><code>WaveFileReader audio = new WaveFileReader(strFile);
string strCompressedFile = "";
byte[] WaveData = new byte[audio.Length];
SimpleCompressorStream Compressor = new SimpleCompressorStream(audio);
Compressor.Enabled = true;
if (Compressor.Read(WaveData, 0, WaveData.Length) > 0)
{
    strCompressedFile = "xxx.wav";
    WaveFileWriter CompressedWaveFile = new WaveFileWriter(strCompressedFile, audio.WaveFormat);
    CompressedWaveFile.Write(WaveData, 0, WaveData.Length);
    CompressedWaveFile.Flush();
}</code></pre>
Afterwards I need to do some normalization of the volume, but I have no idea how to do that. Is there any function in NAudio for that, like the compressor class? If not, what do I have to do?<br />
</div>Ever2007Mon, 22 May 2017 17:36:41 GMTNew Post: How can I normalize my volume of my recorded wav file? 20170522053641PNew Post: how to use SimpleCompressorStream?https://naudio.codeplex.com/discussions/662999<div style="line-height: normal;">I finally managed to make it work that way:<br />
<pre><code>WaveFileReader audio = new WaveFileReader(strFile);
string strCompressedFile = "";
byte[] WaveData = new byte[audio.Length];
SimpleCompressorStream Compressor = new SimpleCompressorStream(audio);
Compressor.Enabled = true;
if (Compressor.Read(WaveData, 0, WaveData.Length) > 0)
{
    strCompressedFile = "xxx.wav";
    WaveFileWriter CompressedWaveFile = new WaveFileWriter(strCompressedFile, audio.WaveFormat);
    CompressedWaveFile.Write(WaveData, 0, WaveData.Length);
    CompressedWaveFile.Flush();
}</code></pre>
</div>Ever2007Mon, 22 May 2017 16:28:33 GMTNew Post: how to use SimpleCompressorStream? 20170522042833PNew Post: Request for some directionhttp://naudio.codeplex.com/discussions/663029<div style="line-height: normal;">I've successfully used NAudio to process an incoming UDP stream, by defining a signal chain composed of a BufferedWaveProvider, followed by a SampleChannel, followed by a WdlResamplingSampleProvider, and finally a WaveOut device to play the audio stream on a USB audio device. Now my question. I am externally connecting the output of my USB audio device to the input of a pro audio card which has an ASIO driver, using PortAudio. I'd like to not use this external arrangement, but instead patch the stream from the Resampler directly to the code that does the processing on the PortAudio side. Is anyone willing to provide some direction for me to experiment with? I can provide more details. Basically I'd like to do away with the PortAudio stuff and patch the Resampler output directly to other signal processing code I have, which is DttSP. Regards, Karin <br />
</div>KarinAnneSat, 20 May 2017 00:21:23 GMTNew Post: Request for some direction 20170520122123ANew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)http://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">Your Read method seems correct to me, so I assume your "getFunction" is causing the problems.
<br />
<br />
Also you can discard BlockAlignReductionStream, as it is not needed here.<br />
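One plausible reason getFunction causes the stutter is that it re-reads the chart and re-runs the polynomial regression for every single sample inside Read. A sketch of the usual fix - compute the coefficients once when a chart point moves, cache them, and only evaluate the polynomial per sample (names like GetFunction and the coefficient ordering follow the thread; the caching wrapper itself is an assumption, and the values below are placeholders):

```csharp
using System;

class CachedTransferFunction
{
    // Coefficients ordered highest degree first; recomputed only when the
    // chart changes, not once per audio sample.
    private double[] coeffs = { 0, 0, 0, 0, 0, 0, 0, 1, 0 }; // identity: y = x

    // Call this from the chart's point-moved event (e.g. after Polyfit).
    public void UpdateCoefficients(double[] newCoeffs)
    {
        coeffs = newCoeffs;
    }

    // Called per sample from Read: just evaluate, no regression here.
    public float GetFunction(float x)
    {
        double y = 0;
        foreach (double c in coeffs)
            y = y * x + c; // Horner's method: one multiply-add per coefficient
        return (float)y;
    }

    static void Main()
    {
        var f = new CachedTransferFunction();
        Console.WriteLine(f.GetFunction(0.5f)); // 0.5 (identity)
    }
}
```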
</div>FreefallFri, 19 May 2017 15:32:48 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170519033248PNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#)http://naudio.codeplex.com/discussions/663016<div style="line-height: normal;">Hi guys,
<br />
I've got a problem with playing my stream through DirectSoundOut. I open a wav file with OpenFileDialog - that's OK. Then I read the data from the wav file, convert it to float, apply my function to effect those samples, and convert them back to byte[] in Read - and now the problem starts when the stream of effected data reaches DirectSoundOut. I can't hear a fluent, effected sound; it sounds like each sample is played multiple times. It changes when I change the latency, but with a small latency I can't hear any sound, and when I raise the latency I get the problem described above.
<br />
I think it's because:
<br />
1) The process is too time-consuming and the data cannot be processed in time, because my effecting function is a transfer function which can vary over time - you have a graph of this function, you can change the positions of its points, and polynomial regression is applied to those points to get a parameterized function. I call this method in the Read method and fit the samples to the equation.
<br />
2) I can't find the best latency...<br />
<pre><code>private BlockAlignReductionStream stream = null;
private NAudio.Wave.DirectSoundOut output = null;

private void button1_Click(object sender, EventArgs e)
{
    OpenFileDialog open = new OpenFileDialog();
    open.Filter = "Wave File (*.wav)|*.wav;";
    if (open.ShowDialog() != DialogResult.OK) return;
    textBox1.Text = open.FileName;
    WaveChannel32 wave = new WaveChannel32(new WaveFileReader(open.FileName));
    EffectStream effect = new EffectStream(wave);
    stream = new BlockAlignReductionStream(effect);
    output = new DirectSoundOut();
    output.Init(stream);
    output.Play();
    button2.Enabled = true;
    chart1.Enabled = true;
}
</code></pre>
This is the part of the code where I open the file and play the effected file<br />
<pre><code>public float getFunction(float x)
{
    double[] arrayX = new double[chart1.Series[0].Points.Count()]; // point positions from the chart on the x axis
    double[] arrayY = new double[chart1.Series[0].Points.Count()]; // point positions from the chart on the y axis
    double[] arrayResult = { };
    for (int i = 0; i < chart1.Series[0].Points.Count(); i++) // fill the arrays
    {
        arrayX[i] = chart1.Series[0].Points[i].XValue;
        arrayY[i] = chart1.Series[0].Points[i].YValues[0];
    }
    arrayResult = PolyRegression.Polyfit(arrayX, arrayY, 8); // instance of the PolyRegression class for solving the system of equations
    // functionVarA-I are the coefficients of the fitted polynomial
    double functionVarI = arrayResult[0];
    double functionVarH = arrayResult[1];
    double functionVarG = arrayResult[2];
    double functionVarF = arrayResult[3];
    double functionVarE = arrayResult[4];
    double functionVarD = arrayResult[5];
    double functionVarC = arrayResult[6];
    double functionVarB = arrayResult[7];
    double functionVarA = arrayResult[8];
    double equationVar = functionVarA * Math.Pow(x, 8) + functionVarB * Math.Pow(x, 7) + functionVarC * Math.Pow(x, 6) + functionVarD * Math.Pow(x, 5) + functionVarE * Math.Pow(x, 4) + functionVarF * Math.Pow(x, 3) + functionVarG * Math.Pow(x, 2) + functionVarH * x + functionVarI; // transfer function
    return Convert.ToSingle(equationVar); // convert to float
}
</code></pre>
This is the code for getting the transfer function from the graph<br />
<pre><code>public class PolyRegression
{
    public static double[] Polyfit(double[] x, double[] y, int degree)
    {
        // Compute the coefficients by solving the system of equations (polynomial regression)
        var v = new DenseMatrix(x.Length, degree + 1);
        for (int i = 0; i < v.RowCount; i++)
            for (int j = 0; j <= degree; j++)
                v[i, j] = Math.Pow(x[i], j); // v[i, j] - left side of the equation, Math.Pow(x[i], j) - right side
        var yv = new DenseVector(y).ToColumnMatrix();
        QR qr = v.QR(); // QR decomposition (triangular matrix)
        var r = qr.R.SubMatrix(0, degree + 1, 0, degree + 1);
        var q = v.Multiply(r.Inverse());
        var p = r.Inverse().Multiply(q.TransposeThisAndMultiply(yv));
        return p.Column(0).ToArray();
    }
}</code></pre>
This is how I solve for the coefficients<br />
<pre><code>public override int Read(byte[] buffer, int offset, int count)
{
    Console.WriteLine("DirectSoundOut requested {0} bytes", count);
    int read = SourceStream.Read(buffer, offset, count);
    for (int i = 0; i < read / 4; i++)
    {
        float sample = BitConverter.ToSingle(buffer, i * 4);
        sample = frm.getFunction(sample);
        byte[] bytes = BitConverter.GetBytes(sample);
        buffer[i * 4 + 0] = bytes[0];
        buffer[i * 4 + 1] = bytes[1];
        buffer[i * 4 + 2] = bytes[2];
        buffer[i * 4 + 3] = bytes[3];
    }
    return read;
}</code></pre>
and this is my Read method in EffectStream
<br />
<br />
So guys, have you got any idea what I have to do? I need to hear a fluent effected sound.
<br />
Thank you so much for your advice and comments. :-)<br />
</div>KessaFri, 19 May 2017 06:57:27 GMTNew Post: DirectSoundOut isn't fluent stream after passing effect (NAudio, C#) 20170519065727A