First steps on Vectorizing Audio Plugins: which Instruction Set do you use in 2018?

DSP, Plugin and Host development discussion.

Post

Same problem: the max pitch shift error doesn't occur at the same semitone as the max relative error.

I can't reproduce your results for the pitch errors (to be sure, I tested with both C++ and Octave).
E.g. this minimal test (using your frequency data for the three notes) shows:

Code: Select all

pitch index:     46                 623                954
relative errors: 1.63845501223e-007 9.93477199191e-008 1.50108653945e-007
pitch errors:    2.83654887341e-006 1.71994146807e-006 2.59873232001e-006
I.e. the pitch errors give the same ordering as the relative freq mul errors, both showing 46 > 954 > 623.
(The relative error values are basically the same as yours, so it's only your pitch errors that are sort of upside-down.)
I can't see any mistakes in your code, so it's most likely something wrong with the "output" itself (like "corrupted" from elsewhere; no idea what it could be, though).

Post

Max M. wrote: Thu Jan 31, 2019 9:14 am I can't see any mistakes in your code, so it's most likely something wrong with the "output" itself (like "corrupted" from elsewhere; no idea what it could be, though).
Found the problem!!!

Here's the code I have:

Code: Select all

double refPitchShift = log(refFreqMul[i]) / ln2per12;
double approxPitchShift = log(approxFreqMul[i]) / ln2per12;
which are simply the ref and approx FreqMuls, not yet multiplied by Freq.

And here's your code:

Code: Select all

double refPitchShift = log(refFreq / freq) / ln2per12;
double approxPitchShift = log(approxFreq / freq) / ln2per12;
which are Freq * ref and Freq * approx FreqMul, then divided by Freq (which introduces an fp "error" due to the additional division).

But does it make sense?
Is the test trustworthy?

Because (in fact) I never do that last division when processing the signal...

********************************************************

Reasoning a bit more about this, what I did with:

Code: Select all

double refPitchShift = log(refFreqMul[i]) / ln2per12;
double approxPitchShift = log(approxFreqMul[i]) / ln2per12;
double pitchShiftError = abs(refPitchShift - approxPitchShift);
is basically to evaluate the absolute error of the pitch, as you suggested before:
Max M. wrote: Sat Jan 26, 2019 12:15 am ...
For a pitch we'll need to use the absolute error instead of the relative one, simply because the pitch scale itself is a relative (and logarithmic) scale. I.e. ±1 unit (e.g. octave, semitone or cent) there means the same change (same impact) regardless of how low or high the reference note is.
But if I do this, it doesn't show me the real worst error: it reports pitch 14.3 as the worst (623), when instead it's pitch -43.4 (46).

Post

Well, in my code I use `refFreq / freq` instead of the corresponding `refFreqMul` only because I don't have any refFreqMul[] data. And no, the 64-bit precision of `(x * freq) / freq` in this case is accurate enough not to make these errors swap almost exactly upside-down like they do. I'm pretty sure the problem is elsewhere. (You could try replacing `refFreq / freq` in my snippet with your `refFreqMul` to see if anything changes significantly.)

I'd probably suggest you start by tracing the refPitchShift and approxPitchShift values for each note (of these three) to find out whether the data gets screwed before or after these lines of code.

Post

Max M. wrote: Thu Jan 31, 2019 12:33 pm Well, in my code I use `refFreq / freq` instead of the corresponding `refFreqMul` only because I don't have any refFreqMul[] data. And no, the 64-bit precision of `(x * freq) / freq` in this case is accurate enough not to make these errors swap almost exactly upside-down like they do. I'm pretty sure the problem is elsewhere. (You could try replacing `refFreq / freq` in my snippet with your `refFreqMul` to see if anything changes significantly.)

I'd probably suggest you start by tracing the refPitchShift and approxPitchShift values for each note (of these three) to find out whether the data gets screwed before or after these lines of code.
Got the error, finally!!!!! My approxFreqMul is (due to the "accuracy test") a float!!
When doing:

Code: Select all

double approxPitchShift = log(approxFreqMul[i]) / ln2per12;
it calls the float version of log.
Instead, using:

Code: Select all

double approxPitchShift = log(approxFreq / freq) / ln2per12;
it calls the double version of log (since approxFreq and freq are both double). And there was the truncation...

Doing this:

Code: Select all

double approxPitchShift = log((double)approxFreqMul[i]) / ln2per12;
Fixed it!!!
What a massacre :ud: :lol:

Now I'm going further with the analysis, to recap and get a general idea of the whole process (but first, let me open a beer).

Meanwhile, thanks for your huge efforts!!!!

Post

it calls the float version of log.

Ah, weirdly this was my very first thought - but then (when writing my snippet) I simply forgot that your approxFreqMul[] could be float, doh!

---
One last (minor) tip:

Code: Select all

double refPitchShift = log(refFreqMul[i]) / ln2per12;
This one is better replaced with the original pitch shift value (e.g. the value of the `note` variable in your snippet), since we are more interested in how the approximated pitch shift differs from the "ideal" one, not from a value that went through a pitch -> multiplier -> pitch conversion, even if it's doubles all the way through (at least for the sake of less cluttered code).

Post

Max M. wrote: Thu Jan 31, 2019 4:28 pm it calls the float version of log.

Ah, weirdly this was my very first thought - but then (when writing my snippet) I simply forgot that your approxFreqMul[] could be float, doh!
Don't worry! Thanks for your effort, much appreciated!

Max M. wrote: Thu Jan 31, 2019 4:28 pm One last (minor) tip:

Code: Select all

double refPitchShift = log(refFreqMul[i]) / ln2per12;
This one is better replaced with the original pitch shift value (e.g. the value of the `note` variable in your snippet), since we are more interested in how the approximated pitch shift differs from the "ideal" one, not from a value that went through a pitch -> multiplier -> pitch conversion, even if it's doubles all the way through (at least for the sake of less cluttered code).
(pedantic mode on)
Sure, even if it's not the "real ideal case", since:

Code: Select all

double note = inc / (double)step; // will introduce fp error (division)
double approxPitchShift = log((double)approxFreqMul[i]) / ln2per12; // will introduce fp error (division)
double pitchShiftError = abs(note - approxPitchShift); // will introduce fp error (maybe? not sure about this, but I know that 0.5 + 0.5 = 0.5 + 0.499999999999999944488848768742172978818416595458984375 in double)
Right? :P
(pedantic mode off)

Post

Max M. wrote: Thu Jan 31, 2019 9:14 am I can't reproduce your results for the pitch errors (to be sure, I tested with both C++ and Octave).
E.g. this minimal test (using your frequency data for the three notes)
However, it seems there are some edge cases where this kind of analysis "fails".
I'm trying to compare the drift using double approx now instead of float.

In this case, I've tried with the double Pade exp one, but it seems that some "conversion" fails.
For example, semitone -20.5: when I convert back to pitch, the fp difference is so tiny that both values seem to snap to the same result after log (the same happens using either log(refFreqMul) or log(refFreq / C0)):

Code: Select all

double refFreqMul = 0.306013385826163830660817666284856386482715606689453125;
double apxFreqMul = 0.30601338582616388617196889754268340766429901123046875;
std::cout << log(refFreqMul) / ln2per12 << std::endl;
std::cout << log(apxFreqMul) / ln2per12 << std::endl;

double refFreq = 5.00393088502943061968153415364213287830352783203125;
double apxFreq = 5.00393088502943239603837355389259755611419677734375;
std::cout << log(refFreq / C0) / ln2per12 << std::endl;
std::cout << log(apxFreq / C0) / ln2per12 << std::endl;
Both go to -20.5, so the pitch drift error becomes 0.0.

Here's your code adapted with this use case (semitones -33.1 (149) and -20.5 (275)):

Code: Select all

#include <cmath>
#include <iostream>
#include <iomanip>

const double C0 = 16.352;
const double ln2per12 = std::log(2.) / 12;

// your ref and approx frequency values:
const int      index[] = { 149, 275 };
const double refFreq[] = { 2.416739314639490654457176788127981126308441162109375, 5.00393088502943061968153415364213287830352783203125 };
const double apxFreq[] = { 2.4167393146394910985463866381905972957611083984375, 5.00393088502943239603837355389259755611419677734375 };

// ............................................................

void testErrors() {
	using namespace std;
	cout.precision(13);
	const int n = sizeof(index) / sizeof(*index);

	cout << "pitch index:    ";
	for (int i = 0; i < n; i++)
		cout << " " << setw(18) << left << index[i];
	cout << endl;

	cout << "relative errors:";
	for (int i = 0; i < n; i++)
		cout << " " << abs(refFreq[i] - apxFreq[i]) / refFreq[i];
	cout << endl;

	cout << "pitch errors:   ";
	for (int i = 0; i < n; i++) {
		double a = log(refFreq[i] / C0) / ln2per12;
		double b = log(apxFreq[i] / C0) / ln2per12;
		cout << " " << abs(a - b);
	}
	cout << endl;
}

// ............................................................

int main() {
	testErrors();
	return 0;
}
Which outputs:

Code: Select all

pitch index:     149                275
relative errors: 1.837555284345e-16 3.549922811114e-16
pitch errors:    7.105427357601e-15 0
Which "fails":

275's relative error (3.549922811114e-16) > 149's relative error (1.837555284345e-16)
but
275's pitch error (0) < 149's pitch error (7.105427357601e-15)

Is that "pitch shift absolute" error evaluation reliable?

Post

Right?
Yep.

Is that "pitch shift absolute" error evaluation reliable?

Well, notice that in this case the relative error is not reliable either, since it's already around the 64-bit fp epsilon. I.e. both values should be considered not different from 0. Now, since the code is supposed to look for a maximum error and not for a minimum (which should be 0 for pitch shift 0 anyway)... well.

(Curiously, I can't reproduce these results of yours either, but this time it's more likely because of a different compiler/options/language-runtime/used-instruction-set - yet again because the values are on the edge of double precision.)

Hmm, probably in half-pedantic mode - at a quick glance (very roughly, I guess there are more solid methods): the absolute multiplier error (in double) should first be compared to the float eps - and be considered 0 if it's below (i.e. the calculated multiplier already hits its 32-bit precision, thus any subsequent error calculations make no sense since the real error will be higher anyway).

Post

Max M. wrote: Fri Feb 01, 2019 1:48 pm Right?
Yep.

Is that "pitch shift absolute" error evaluation reliable?

Well, notice that in this case the relative error is not reliable either, since it's already around the 64-bit fp epsilon. I.e. both values should be considered not different from 0. Now, since the code is supposed to look for a maximum error and not for a minimum (which should be 0 for pitch shift 0 anyway)... well.

(Curiously, I can't reproduce these results of yours either, but this time it's more likely because of a different compiler/options/language-runtime/used-instruction-set - yet again because the values are on the edge of double precision.)

Hmm, probably in half-pedantic mode - at a quick glance (very roughly, I guess there are more solid methods): the absolute multiplier error (in double) should first be compared to the float eps - and be considered 0 if it's below (i.e. the calculated multiplier already hits its 32-bit precision, thus any subsequent error calculations make no sense since the real error will be higher anyway).
Ah, the epsilon. Nice. I'm not really sure I've got the concept entirely; I'll investigate more.
Thanks, again!!!

Anyway...
The worst double pitch shift error seems to be 7.1054273576e-15, so I guess it's really irrelevant for pitch here.
The same goes for the float version, though: 2.83654887312e-06 is waaaay lower than 0.1 cent (i.e. 1/12000 of an octave), as suggested some time ago by mystran.

So now it's about performance 8)

I'll do some tests comparing the SIMD versions of Pade and the one suggested by 2DaT (for which, unfortunately, I don't have a double version; I'll try only float right now).

Go experimenting...

Post

Out of curiosity: what if within this pitch range (-48/48) I add modulation to it? :)

I mean: the steps wouldn't always be ~0.1, but whatever double (or float, if I use single precision) value the system (DAW or Envelope/LFO) feeds to it (noteNorm + mod).

Will those approximations be "unstable"?
Or introduce even more drift error for subtle changes in the input (note) range?

Tried to test "every" possible combination:

Code: Select all

double pitch = inc / (double)step; // -48.0 exact value (both float and double)
double end = max / (double)step; // 48.0 exact value (both float and double)

long long counter = 0;
while (pitch < end) { // note: `<=` would never terminate, since nextafter(end, end) == end
	pitch = std::nextafter(pitch, end);
	
	double refValue = pitch * ln2per12;
	
	// ... analysis

	counter++;
}
But obviously it explodes: tooooo many values to analyze (fp is pretty wide, yes hehe).

How would you test it?

Post

Nowhk wrote: Mon Feb 04, 2019 8:44 am Out of curiosity: what if within this pitch range (-48/48) I add modulation to it? :)

I mean: the steps wouldn't always be ~0.1, but whatever double (or float, if I use single precision) value the system (DAW or Envelope/LFO) feeds to it (noteNorm + mod).

Will those approximations be "unstable"?
Or introduce even more drift error for subtle changes in the input (note) range?
I'm not sure what you are trying to describe here, but for audio applications you want a continuous approximation error. Sudden jumps in pitch are usually more audible. Also, discontinuities may degrade the convergence of a Newton method.
The good thing is that accurate approximations are more "continuous", because the function itself is continuous.

However, no matter how hard you try, you cannot get absolute continuity (and accuracy), and anything above 64-bit is going to be orders of magnitude slower than native types.

But does it matter?

Even if you generate an ideal signal, there are:
- the DAW mixer,
- the system mixer,
- sound drivers (which can do format conversion and resampling),
- audio codec/sound card jitter, noise, intermodulation (quite high on consumer devices),
- amplifier noise, intermodulation,
- speaker response, non-linearity...
:shock:
In the end, all that matters is faith in your equipment and your personal opinion.

My personal opinion is that pitch accuracy for the oscillators is the least of your worries.

Post

2DaT wrote: Mon Feb 04, 2019 12:42 pm I'm not sure what you are trying to describe here, but for audio applications you want a continuous approximation error. Sudden jumps in pitch are usually more audible. Also, discontinuities may degrade the convergence of a Newton method.
The good thing is that accurate approximations are more "continuous", because the function itself is continuous.
I meant...

Until now, I've evaluated those approximated exp functions on only 961 distinct values:

Code: Select all

int min = -480;
int max = 480;
int step = 10;
int range = max - min;
int numSteps = range + 1;

int inc = min;

// 961 steps: -48.0 to 48.0, stepping by 0.1
for (int i = 0; i < numSteps; i++, inc++) {
	double note = inc / (double)step;
	
	// ...
}
But in a real scenario, I don't have only 961 distinct values, because in between I also modulate it.

Like this: take one of the 961 distinct param values => normalize => apply mod (clamp(paramNorm + mod) - note it can take any value in [0.0, 1.0]) => denormalize => exp(result * ln2per12).

Thus, I wouldn't like to get "sudden jumps", but rather have an approximated exp that won't drift too much in between those 961 values.

That's the question: is it "stable"?
How would I test it? And how much will this drift increase compared to the results I got on only 961 distinct values?

2DaT wrote: Mon Feb 04, 2019 12:42 pm My personal opinion is that pitch accuracy for the oscillators is the least of your worries.
Of course :) But I'm not worried: I'm simply trying to learn this stuff 8)

Post

2DaT wrote: Mon Feb 04, 2019 12:42 pm Also, discontinuities may degrade convergence of a Newton method.
To get ideal quadratic convergence with Newton, you need your approximation (and obviously the function you are approximating) to be C2 to a decent precision. Since Newton is quite sensitive to errors in the derivatives (at least in the sense that your convergence tends to slow down), you're probably better off using the analytic derivative of the approximation itself (i.e. you treat the approximation as the actual function for the purpose of Newton) if the approximation isn't quite good enough otherwise; I think it was Urs(?) who suggested this in one of the filter threads of the past.

That said, for anything other than Newton, about 6 good decimal places is probably enough if you're C0 continuous (and maybe a few more if you're not).

edit: another example where precision matters (although in this case for imaginary exponents) that I can think of would be the twiddle factors of large FFTs, since there the accuracy of the whole algorithm relies on them, and if your basis gets too far from orthogonal, you're pretty much screwed... but again, it's a bit of a special case, and elsewhere your sin() approx. is probably fine if it's good to about 6 decimal places :)

Post

The special case of exp() with Newton in general is a bit funky, because if your exp() is accurate enough, you get the derivative for free. So in this case, you can basically spend twice the cycles for the same cost, if the alternative is to separately compute the derivative of the approximation.

Post

Nowhk wrote: Mon Feb 04, 2019 1:00 pm But in a real scenario, I don't have only 961 distinct values, because in between I also modulate it.

Like this: take one of the 961 distinct param values => normalize => apply mod (clamp(paramNorm + mod) - note it can take any value in [0.0, 1.0]) => denormalize => exp(result * ln2per12).

Thus, I wouldn't like to get "sudden jumps", but rather have an approximated exp that won't drift too much in between those 961 values.

That's the question: is it "stable"?
How would I test it? And how much will this drift increase compared to the results I got on only 961 distinct values?
It's a hard question.

If you analyze the approximation as a black box, there is no way of guaranteeing the error except exhaustive testing. For single precision that is easily doable (though it takes a bit of time).

The function can be decomposed into parts with a predictable error (range reduction, polynomial approximation, polynomial evaluation). A thorough error analysis can then provide an upper bound on the total error.

For example, in exp_mp2 I did exhaustive testing to bound the error of the polynomial.

Code: Select all

 [-0.5,0.5] 2^x approx polynomial ~ 2.4 ulp
The evaluation error is around ~1 ulp.
The range reduction error is around 64 ulps, but that doesn't matter, because we rarely operate on error-free quantities.

Also, even if exhaustive testing is not an option, the error usually does not show erratic behavior. Under some reasonable assumptions, the error will be continuous to a degree.
