My b^x approximation

DSP, Plugin and Host development discussion.

Post

mystran wrote: Thu Jan 31, 2019 11:57 am Have you tried doing a Pade approximation on t(x)=tanh(sqrt(x))/sqrt(x), such that tanh(x)=x*t(x*x)?
...
Aaah ... tricks, tricks, tricks ... I tried what you suggested and it simplified the approximation down to a 4th-order function (error below 10^-7 on [-20,20]). Have to try this trick with some other methods as well.

Post

juha_p wrote: Thu Jan 31, 2019 12:28 pm
mystran wrote: Thu Jan 31, 2019 11:57 am Have you tried doing a Pade approximation on t(x)=tanh(sqrt(x))/sqrt(x), such that tanh(x)=x*t(x*x)?
...
Aaah ... tricks, tricks, tricks ... I tried what you suggested and it simplified the approximation down to a 4th-order function (error below 10^-7 on [-20,20]). Have to try this trick with some other methods as well.
The downside of Pade approximations though is that you end up with the expensive division. :(

Post

mystran wrote: Thu Jan 31, 2019 12:34 pm
The downside of Pade approximations though is that you end up with the expensive division. :(
It's not so much a downside when instruction-level parallelism can be utilized. But these rational equations seem to end in division more often than not.

Post

Just for fun, here is the exp() approximation I discovered while calculating tanh(x):

https://www.desmos.com/calculator/8ppn9p9mx4

Post

juha_p wrote: Thu Jan 31, 2019 12:28 pm
mystran wrote: Thu Jan 31, 2019 11:57 am Have you tried doing a Pade approximation on t(x)=tanh(sqrt(x))/sqrt(x), such that tanh(x)=x*t(x*x)?
...
Aaah ... tricks, tricks, tricks ... I tried what you suggested and it simplified the approximation down to a 4th-order function (error below 10^-7 on [-20,20]). Have to try this trick with some other methods as well.
I actually can't reproduce this, so I wonder what your approximation looks like, exactly?

Having equal orders for the numerator and denominator seems to be the best case(?), but with orders 8/8 I still get an error of 10^-6 at +/-20 against the real tanh (as computed by Maxima).

Post

mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like, exactly?

Having equal orders for the numerator and denominator seems to be the best case(?), but with orders 8/8 I still get an error of 10^-6 at +/-20 against the real tanh (as computed by Maxima).
Quickly looked at the calculation and it looks like you're right ... 4th order (4,4) has a max error of ~13/100000 @ -20.0. Here's the 5th-order function:

Code: Select all

r[x] = (13749310575+1964187225 x+64324260 x^2+675675 x^3+2145 x^4+x^5)  /  (13749310575+6547290750 x+413513100 x^2+7567560 x^3+45045 x^4+66 x^5)
https://www.desmos.com/calculator/qml1tzsniq
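For reference, here is a minimal sketch of evaluating this as tanh(x) ≈ x·r(x²) with Horner's scheme, using the coefficients posted above (the function name is hypothetical):

```cpp
#include <cmath>

// tanh(x) ~= x * r(x*x), with r the 5th-order rational posted above.
// Horner evaluation; the argument s = x*x is always non-negative.
static double tanh_rational(double x)
{
    double s = x * x;
    double num = ((((s + 2145.0) * s + 675675.0) * s + 64324260.0) * s
                  + 1964187225.0) * s + 13749310575.0;
    double den = ((((66.0 * s + 45045.0) * s + 7567560.0) * s + 413513100.0) * s
                  + 6547290750.0) * s + 13749310575.0;
    return x * num / den;
}
```

A single division in total, and the odd symmetry tanh(-x) = -tanh(x) holds exactly by construction, since only the leading x changes sign.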

Post

mystran wrote: Wed Jan 30, 2019 9:16 pm
2DaT wrote: Wed Jan 30, 2019 7:47 pm
mystran wrote: Wed Jan 30, 2019 4:15 pm
Sometimes it might actually be nice to get zeroes for underflows even if you don't care for the denormals. As an example, tanh(x)=(exp(2*x)-1)/(exp(2*x)+1) is normally safe for large negative values (and one can handle the positive values by symmetry to avoid overflowing exp()), it just saturates to -1/1 as exp(2*x) becomes zero.

I suppose in most audio situations one would approximate tanh() separately, but there might be a few other similar situations where underflow is actually perfectly fine as long as exp() correctly returns zero.
Well, you don't need a full range exp for a good tanh either. I think for all x > 15, tanh(x) is exactly 1 in floating point arithmetic.
Python says:

>>> math.tanh(19.)
0.9999999999999999
Yeah, the cutoffs are (I hope I got it right):

Code: Select all

Double:19.0615
Float:9.01091
Code to calculate the limits:

Code: Select all

#include <cmath>
#include <iostream>
#include <boost/multiprecision/cpp_bin_float.hpp>

int main()
{
	using namespace boost::multiprecision;
	typedef boost::multiprecision::number<boost::multiprecision::cpp_bin_float<1024, boost::multiprecision::digit_base_2>, boost::multiprecision::et_off> bigfloat;
	// midpoint between 1.0 and the largest double below it:
	// tanh(x) rounds to exactly 1 once it crosses atanh() of that midpoint
	bigfloat ff = std::nexttoward(1.0, 0);
	ff = (1 + ff) / 2;
	std::cout << "Double:" << atanh(ff) << "\n";

	// same for float
	ff = std::nexttoward(1.0f, 0);
	ff = (1 + ff) / 2;
	std::cout << "Float:" << atanh(ff) << "\n";
}
Actually there is no sense in measuring the error outside those limits. Even less so when your approximation is not meant to be last-bit accurate.
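The cutoffs above can be sanity-checked directly in double and float, assuming a reasonably accurate libm (helper names are hypothetical):

```cpp
#include <cmath>

// true when tanh(x) in double rounds to exactly 1.0 (cutoff ~19.0615)
inline bool tanh_saturated(double x) { return std::tanh(x) == 1.0; }

// true when tanh(x) in float rounds to exactly 1.0f (cutoff ~9.011)
inline bool tanhf_saturated(float x) { return std::tanh(x) == 1.0f; }
```

For example, tanh(20.0) saturates in double while tanh(19.0) is still representably below 1.0, bracketing the 19.0615 cutoff.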

Post

There is a zero in the denominator at around -2.467, might be a problem at that exact value?

Post

camsr wrote: Thu Jan 31, 2019 10:45 pm
There is a zero in the denominator at around -2.467, might be a problem at that exact value?
This ... issue?

Post

camsr wrote: Thu Jan 31, 2019 10:45 pm
There is a zero in the denominator at around -2.467, might be a problem at that exact value?
Keep in mind that the full approximation is x*f(x*x), so the argument is always positive (because it is squared), and in fact f(x) on its own approximates a function that isn't even defined for negative values.

Post

mystran wrote: Fri Feb 01, 2019 7:24 pm
camsr wrote: Thu Jan 31, 2019 10:45 pm
There is a zero in the denominator at around -2.467, might be a problem at that exact value?
Keep in mind that the full approximation is x*f(x*x), so the argument is always positive (because it is squared), and in fact f(x) on its own approximates a function that isn't even defined for negative values.
I see that now. What is the purpose of x*f(x^2)? Is it some kind of modification of the function order?

Post

mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like, exactly?
Forgot to mention earlier that you can calculate this using WolframAlpha!

Post

camsr wrote: Fri Feb 01, 2019 8:53 pm
mystran wrote: Fri Feb 01, 2019 7:24 pm
camsr wrote: Thu Jan 31, 2019 10:45 pm
There is a zero in the denominator at around -2.467, might be a problem at that exact value?
Keep in mind that the full approximation is x*f(x*x), so the argument is always positive (because it is squared), and in fact f(x) on its own approximates a function that isn't even defined for negative values.
I see that now. What is the purpose of x*f(x^2)? Is it some kind of modification of the function order?
The story is such that years ago I was trying to soft-clip the length of a 2D vector (e.g. a [left,right] or [mid,side] vector). For this I wanted to compute g=tanh(x)/x where x=sqrt(x0*x0+x1*x1), such that you could then multiply x0 and x1 by g. This way, if you have an equal-power-panned mono signal, it gets clipped exactly the same no matter where you pan it (and it's kinda cool with real stereo signals too, if you use it inside stereo filters or something). At some point I figured that, since I was doing Pade approximations anyway, why not try merging the sqrt() in there as well and see what happens (i.e. why waste time computing the square root if you can fold it into the approximation).

As luck would have it, the combination works almost like magic, much better than expected. As to WHY this happens, I honestly don't have the slightest clue.
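The sqrt-folding trick above can be sketched as follows: since tanh(r)/r = f(r²), the gain for a 2D vector needs only r² = x0²+x1² and never the square root itself. This sketch reuses the 5th-order rational posted earlier in the thread; the function names are hypothetical:

```cpp
#include <cmath>

// f(s) ~= tanh(sqrt(s))/sqrt(s): the 5th-order rational posted earlier.
static double t_rational(double s)
{
    double num = ((((s + 2145.0) * s + 675675.0) * s + 64324260.0) * s
                  + 1964187225.0) * s + 13749310575.0;
    double den = ((((66.0 * s + 45045.0) * s + 7567560.0) * s + 413513100.0) * s
                  + 6547290750.0) * s + 13749310575.0;
    return num / den;
}

// Soft-clip the length of a stereo sample [l, r]: multiply both channels
// by g = tanh(len)/len, computed without ever taking a square root.
static void softclip2d(double& l, double& r)
{
    double g = t_rational(l * l + r * r);
    l *= g;
    r *= g;
}
```

Because the gain depends only on the squared length, panning the same mono signal around changes nothing about how hard it gets clipped.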

Post

juha_p wrote: Fri Feb 01, 2019 10:06 pm
mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like, exactly?
Forgot to mention earlier that you can calculate this using WolframAlpha!
Oh, I tried various combinations of orders (more or less by "gut feel") with Maxima first, then asked when I couldn't find one that would give me the same performance.

Post

mystran wrote: Fri Feb 01, 2019 10:12 pm ...

As luck would have it, the combination works almost like magic, much better than expected. As to WHY this happens, I honestly don't have the slightest clue.
Isn't that "func(sqrt(x))/sqrt(x)" related to the minimax thingy?

