First steps on Vectorizing Audio Plugins: which Instruction Set do you use in 2018?

DSP, Plugin and Host development discussion.
Post

2DaT wrote: Mon Feb 04, 2019 2:43 pm For example, in exp_mp2 I did exhaustive testing to bound the error of a polynomial.
Are you saying you tried your exp_mp2() approx function on every float value in [-0.5, 0.5]?

Code: Select all

double start = -5.0;
double end = 5.0;

while (start < end) {
	// pack the next 4 "start" values into an __m128 vector v_value

	// approx exp
	__m128 result = exp_mp2(v_value);

	// next float
	start = std::nextafter(start, end);
}
:o How much time did that test take?

Post

Nowhk wrote: Sat Feb 09, 2019 10:24 am :o How much time did that test take?
The single precision mantissa is only 23 bits (and 1 bit for sign), so that's only 2^24 = ~16.8 million distinct values, which is not a whole lot.

Post

mystran wrote: Sat Feb 09, 2019 12:29 pm
Nowhk wrote: Sat Feb 09, 2019 10:24 am :o How much time did that test take?
The single precision mantissa is only 23 bits (and 1 bit for sign), so that's only 2^24 = ~16.8 million distinct values, which is not a whole lot.
16.8 million? 16.800.000?

Uhm, isn't 2^54 = 18.014.398.509.481.984 ~ 18.014 billions (in old British English)?
Anyway yes, I've used nextafter instead of nextafterf.
Not a whole lot:

Code: Select all

	float start = -0.5f; // -0.5 exact value (both float and double)
	float end = 0.5f;    // +0.5 exact value (both float and double)
	long long counter = 0;
	while (start < end) {
		start = std::nextafterf(start, end);
		counter++;
	}
	std::cout << "num values: " << counter << std::endl;
It took <1 minute, and it's "only" 2.113.929.217 distinct values: faster to check.
I'll do some tests...

Post

Nowhk wrote: Sat Feb 09, 2019 1:29 pm
mystran wrote: Sat Feb 09, 2019 12:29 pm
Nowhk wrote: Sat Feb 09, 2019 10:24 am :o How much time did that test take?
The single precision mantissa is only 23 bits (and 1 bit for sign), so that's only 2^24 = ~16.8 million distinct values, which is not a whole lot.
16.8 million? 16.800.000?

Uhm, isn't 2^54 = 18.014.398.509.481.984 ~ 18.014 billions (in old British English)?
2^54 is indeed around 18*10^15, which is 18 quadrillion in modern English (or 18 billiard in old English and other languages like Finnish using the "long scale")... but 2^54 is double precision and the function was in single precision.

There might actually be a bit more values to test if you actually test all the small exponents as well, but even then we're only really talking about 2^31 values or so, which is not a huge problem (at least if you skip the denormals, which might on some CPUs take a long time).

Testing every value for double precision would take a whole lot longer obviously (might want to rent some computation time for that, I guess), but then again for computing accurate error estimates for doubles, you would really want to use extended precision.
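
For what it's worth, instead of stepping with nextafterf you can also just walk the 32-bit patterns directly and filter out what you don't want to test. A rough sketch of that approach (the filtering and the names here are only illustrative, not anyone's actual test harness):

Code: Select all

#include <cfloat>
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
	// Reinterpret a 32-bit pattern as a float (well-defined via memcpy).
	auto as_float = [](uint32_t bits) {
		float f;
		std::memcpy(&f, &bits, sizeof(f));
		return f;
	};

	uint64_t tested = 0;
	// Walk all 2^32 bit patterns; keep only finite, normal values in [-0.5, 0.5].
	for (uint64_t b = 0; b <= 0xFFFFFFFFull; ++b) {
		const float x = as_float(static_cast<uint32_t>(b));
		if (!std::isfinite(x)) continue;                   // skip inf/NaN
		if (x != 0.0f && std::fabs(x) < FLT_MIN) continue; // skip denormals (slow on some CPUs)
		if (x < -0.5f || x > 0.5f) continue;
		// ...evaluate the approximation here and compare against a double reference...
		++tested;
	}
	std::cout << "values tested: " << tested << std::endl;
}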

Post

mystran wrote: Sat Feb 09, 2019 4:08 pm There might actually be a bit more values to test if you actually test all the small exponents as well, but even then we're only really talking about 2^31 values or so, which is not a huge problem (at least if you skip the denormals, which might on some CPUs take a long time).
Ok, I think I've got it.
Yes, not too many values to test (2113929217):

Code: Select all

float start = -0.5f;
float end = 0.5f;
double maxPitchShift = 0.0;
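// (assumption: ln2per12 is defined elsewhere in the thread as the natural log
// of one semitone's frequency ratio, i.e. std::log(2.0) / 12.0)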

while (start <= end) {
	double refValue = start;

	// ref
	double refFreqMul = exp(refValue);

	// approx
	__m128 v_float = Exp_Remez_2DaT_SSE2(_mm_set1_ps((float)refValue)); 
	double apxFreqMul = v_float.m128_f32[0];

	// back to pitch
	double pitchRef = log(refFreqMul) / ln2per12;
	double pitchApx = log(apxFreqMul) / ln2per12;

	// error
	double error = fabs(pitchRef - pitchApx);
	if (error > maxPitchShift) {
		maxPitchShift = error;
	}

	start = std::nextafterf(start, end + 1.0f);
}
std::cout << "pitch shift: " << maxPitchShift << std::endl;
The max pitch shift is 4.34936839788e-06 considering only the floats between -0.5 and 0.5.

But I'm still comparing approx values with the ref ones in float, not double!
Not really sure it's correct, uhm...

Post

[pedantic/maniacal mode on]

After some further reflection, I would say that the test I did above only shows the error (for each available float value in [-0.5, 0.5]) of the approx function itself, compared to the "ideal"/scalar C++ exp() function.

In fact, it doesn't account at all for the fact that my modulation values are calculated in double (param + mod are double right now): since I approximate using the float version, a double-to-float conversion error is already introduced on top (and many double values convert to the same float value). So the errors compound.

However: if I use the float (or double) version of the same approx function (2DaT's, for example) for fixed (desired) values (i.e. the 961 in the test case), it won't make any humanly perceptible difference!! It simply won't be noticeable at all, even if I introduce an absolute pitch error on the order of 1e-16 (double) or 1e-6 (float). Both would be OK and work nicely!

But...
Is this the same for modulation? i.e. for the values in between two desired values?
I would say there might be some "edge cases" where, using the same approx function, the double implementation instead of the float one could introduce some noticeable differences (i.e. at the "border" of some pitch-mul value, where a 1e-6 error falls onto another perceived tone).

I mean: I believe hearing is "stepping"... so above some threshold, an introduced error (whether 1e-16 or 1e-6) could cross a line where another tone is perceived. With desired values, this threshold is far away, so 1e-16 or 1e-6 errors won't be noticeable at all!
But I ask you: is this also true for modulation, where border values can be hit?

Or could a modulation using float instead of double (or double instead of hypothetical "real" infinite precision) in fact sound different to humans?

[pedantic/maniacal mode off]

Post

mystran wrote: Fri Jan 18, 2019 2:17 am In my experience, if you are only concerned about tuning, then having an accuracy of about 0.1 cent (ie. 1/12000 octave) or so is typically more than enough. That's roughly half a Hertz at 10kHz and typically roughly on the order of magnitude of what you would hope to get from a typical fine-tuning knob on a synth (ie. 1/1000 of a semitone).
I've noticed that playing a signal at some freq +/- 1 semitone is really imperceptible to me.
So of course 0.1 is more than enough.
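
(As a quick sanity check of that "half a Hertz at 10kHz" figure - just my own back-of-the-envelope calculation, nothing from the thread:)

Code: Select all

#include <cmath>
#include <cstdio>

int main() {
	// Frequency offset produced by a 0.1 cent detune at 10 kHz.
	const double f = 10000.0; // Hz
	const double cents = 0.1;
	const double df = f * (std::pow(2.0, cents / 1200.0) - 1.0);
	std::printf("0.1 cent at 10 kHz = %f Hz\n", df); // ~0.58 Hz, so "half a Hertz" checks out
	return 0;
}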

Then I remembered intermodulation distortion, so I placed two signals playing together with the same note (C1 in my example), with one of them at +0.1 cent (for the test I used Sytrus, which I consider a nice synth).

Here's the result after 2 minutes:

[attached image]
Isn't this a problem, mystran? 2 minutes is not that much in a musical context...

(also, probably, with an error on the order of 1e-6 this would happen much later than 2 minutes, but I haven't tested yet how many minutes it takes to introduce "noticeable variations").

What's your opinion about this?

Post

Two sine waves with differing frequencies are supposed to interfere with each other like your example shows...

Post

Nowhk wrote: Sun Mar 10, 2019 11:29 am
mystran wrote: Fri Jan 18, 2019 2:17 am In my experience, if you are only concerned about tuning, then having an accuracy of about 0.1 cent (ie. 1/12000 octave) or so is typically more than enough. That's roughly half a Hertz at 10kHz and typically roughly on the order of magnitude of what you would hope to get from a typical fine-tuning knob on a synth (ie. 1/1000 of a semitone).
I've noticed that playing a signal at some freq +/- 1 semitone is really imperceptible to me.
So of course 0.1 is more than enough.
I assume this is a typo and you're really talking about cents (=1/100 semitone) rather than semitones, because if you can't hear a difference of 1 semitone (at least away from the extremes of the hearing range), that's a pretty serious case of amusia.

That said, two sines at differing frequencies are supposed to beat against each other. The less the frequencies differ, the slower the beating. If you increase the detuning slightly, you'll see what's going on.
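
(For reference, the textbook identity behind this: sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t), i.e. a sine at the average frequency whose amplitude envelope rises and falls |f1 - f2| times per second, so the smaller the detune, the slower the beating.)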

Post

Nowhk
And just in case: this is not what "intermodulation distortion" really is. What you see there is (as already explained by mystran): https://en.wikipedia.org/wiki/Beat_(acoustics).

Post

mystran wrote: Sun Mar 10, 2019 2:55 pm I assume this is a typo and you're really talking about cents (=1/100 semitone) rather than semitones, because if you can't hear a difference of 1 semitone (at least away from the extremes of the hearing range), that's a pretty serious case of amusia.
Dohhh :dog:
Of course I meant cent, not a semitone hehe, sorry!
camsr wrote: Sun Mar 10, 2019 2:47 pm Two sine waves with differing frequencies are supposed to interfere with each other like your example shows...
mystran wrote: Sun Mar 10, 2019 2:55 pm That said, two sines at differing frequencies are supposed to beat against each other. The less the frequencies differ, the slower the beating. If you increase the detuning slightly, you'll see what's going on.
Of course "beat" is supposed to be introduced if you are looking to it (i.e. play two signal with different freqs).

But what if the difference in freq between the two signals comes from the approx function drifting the final result?

Let's say I have an approx function which introduces an error of 0.09 (so, < 0.1).
Once I play two signals (two sine waves for example), one at C2, the other at C3 (i.e. 1 octave up), I would expect a "clean" sum of the two, no "beat" at all.

But if the approx exp function drifts C2 (or C3, or both... causing a delta of 0.09), I will get a "beat" effect as well, between two signals that are supposed to play without this effect.

Am I wrong?
Isn't this a problem?

(again, not sure how many minutes it would take with the 2DaT approx function, whose error is around 1e-6; I'm just reflecting on the threshold expressed by mystran, < 0.1).

Post

Nowhk wrote: Sun Mar 10, 2019 7:29 pm Let's say I have an approx function which introduces an error of 0.09 (so, < 0.1).
Once I play two signals (two sine waves for example), one at C2, the other at C3 (i.e. 1 octave up), I would expect a "clean" sum of the two, no "beat" at all.
For the 2^x based approximations here (and assuming you're skipping the pointless conversion to base-e and back), going up one octave only bumps the floating point exponent by one, so the relative error is exactly the same. Even then, two oscillators will eventually drift apart, because the phase-accumulation itself is in-exact.

If you want to keep two sines (eg. additive partials) exactly in sync, you really should compute them from the same phase accumulator... but really for the most part it's not usually a huge concern in practice.
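
A minimal sketch of what "the same phase accumulator" can look like (the struct and names below are only my illustration, not code from any particular synth):

Code: Select all

#include <cmath>

// Two harmonically related sines driven by one shared phase accumulator,
// so their octave relationship can never drift apart.
struct TwoPartials {
	double phase = 0.0; // in cycles, kept in [0, 1)
	double inc = 0.0;   // cycles per sample for the fundamental

	void setFreq(double freqHz, double sampleRate) { inc = freqHz / sampleRate; }

	void tick(float& fundamental, float& octaveUp) {
		const double twoPi = 6.283185307179586;
		fundamental = (float)std::sin(twoPi * phase);
		octaveUp    = (float)std::sin(twoPi * 2.0 * phase); // exactly 2x the same phase
		phase += inc;
		if (phase >= 1.0) phase -= 1.0;
	}
};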

ps. You really need to get out of the mind-set where you expect to be able to get your errors so small that you can pretend your computations are exact. It just doesn't work like that. It's always going to be in-exact.

Post

I will get a "beat" effect

You will, but the tremolo/chorus effect won't be actually audible as long as the detune is small (roughly less than a few cents).
Recall that it's also never possible to get a perfect tune with either analog electronic synths or acoustic instruments. And that's not a problem - we've survived. In fact a tiny detune is often encouraged to get a richer sound.

Post

mystran wrote: Sun Mar 10, 2019 7:39 pm Even then, two oscillators will eventually drift apart, because the phase-accumulation itself is in-exact.
Of course, but I think the order of magnitude of this is around 1e-15 (or in any case much lower than 1e-6).

mystran wrote: Sun Mar 10, 2019 7:39 pm For the 2^x based approximations here (and assuming you're skipping the pointless conversion to base-e and back), going up one octave only bumps the floating point exponent by one, so the relative error is exactly the same.
OK, apart from the pointless conversion to base-e and back (which is just fp math error, not adding much to this operation, roughly similar to the phase-acc error I would say), from what I see, in the range [-48.0, 48.0] with step 0.1, I get the max error (for both the relative error and the pitch shift with base-e and back) at pitch -36.6.

At that pitch, relative error (using the 2DaT approx exp function) is 7.105427358e-15, while pitch shift (i.e. abs(refPitchShift - log(approxFreqMul) / ln2per12)) is 5.947551934e-06.

Now, if I go one octave up (i.e. -36.6 + 12 = -24.6), I see that both the relative error and the pitch shift for -24.6 are 0.0: not really the same relative error :ud:
Not sure what you mean by "the relative error is exactly the same".

In this case, I would say that playing a -36.6 and a -24.6 sine wave will introduce (apart from the phase accumulation error) an error of ~5.947551934e-06 (more or less, because it could also contain the already cited base-e-and-back error); in any case, an error around 1e-06.

1e-06 error, if I'm not wrong, is about 0.0001 cent (-36.6 pitch + 1e-06 error => -3660 cents + 1e-04 error => error of 0.0001 cent).

Now I'll try to play with Sytrus and see after "how much time" beating occurs with this kind of pitch-error magnitude. I believe it'll be a very long time :D

mystran wrote: Sun Mar 10, 2019 7:39 pm ps. You really need to get out of the mind-set where you expect to be able to get your errors so small that you can pretend your computations are exact. It just doesn't work like that. It's always going to be in-exact.
You are right! That's the whole point. In fact I'm understanding that it will never be perfect.
But at least I'm trying to understand within which limits I can be satisfied. I wouldn't say 0.1 cent is OK, that's my doubt, following your words.

If the approx exp function introduces 0.1 cent of error when setting the pitch, as shown above, it will introduce noticeable "beating", which is not good.

Max M. wrote: Sun Mar 10, 2019 7:52 pm You will, but the tremolo/chorus effect won't be actually audible as long as the detune is small (roughly less than a few cents).
Uhm? Two sine waves playing together with a detune of just 1 cent between them is totally audible :o
Here's a ten-second recording:

[attached image]

Or what do you mean?

Post

Nowhk
Two sine waves playing together with a detune of just 1 cent between them is totally audible

With "a few cents" I was targeting more on your example of detuning of sines of different octaves (and/or harmonically rich sounds of a real world if it's about the same note "unison"). Obviously for pure sine waves the beat frequency of 1 cent detune (Fbeat = F*1.00057778950655 - F) is audible if F is high enough (like in your example above).
OK, so let's get pedantic and stick to sine waves of the same note, but then also use the actual error values of your concern (e.g. < 1/1000 of a cent): did you estimate what the beat frequency would be? Please do: fbeat = (f * 2^(detune_in_cents/1200)) - f.
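
For instance, plugging a couple of illustrative numbers into that formula (the example frequencies are arbitrary):

Code: Select all

#include <cmath>
#include <cstdio>

// fbeat = (f * 2^(detune_in_cents/1200)) - f
static double beatHz(double f, double cents) {
	return f * std::pow(2.0, cents / 1200.0) - f;
}

int main() {
	std::printf("1 cent at 10 kHz:       %.4f Hz\n", beatHz(10000.0, 1.0));    // ~5.78 Hz
	std::printf("0.0001 cent at 10 kHz:  %.6f Hz\n", beatHz(10000.0, 0.0001)); // ~0.00058 Hz (one beat every ~29 minutes)
	std::printf("0.0001 cent at 32.7 Hz: %.8f Hz\n", beatHz(32.7, 0.0001));    // ~0.0000019 Hz (one beat every ~6 days)
	return 0;
}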
