My b^x approximation

Post

juha_p wrote: Fri Feb 01, 2019 10:18 pm
mystran wrote: Fri Feb 01, 2019 10:12 pm ...

As luck would have it, the combination works almost like magic, much better than expected. As to WHY this happens, I honestly don't have the slightest clue.
Isn't that "func(sqrt(x))/sqrt(x)" related to the minimax thingy?
As I said... I don't have the slightest clue. :D

Post

The Taylor series for tanh consists only of odd terms, so tanh(x)/x consists only of even terms. It makes more sense to approximate it in even terms only, because a polynomial consisting only of even powers is roughly twice as cheap to evaluate: substitute y = x^2 and the degree halves.
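
To make that concrete, here's a minimal sketch of the even-term evaluation (the coefficients c[] are placeholders, not a fitted approximation):

Code: Select all

#include <math.h>   // fmaf

/* tanh(x)/x has only even powers, so tanh(x) ~ x * P(x*x):
   a degree-6 polynomial in x becomes a degree-3 Horner chain in y. */
float tanh_even_form(float x, const float c[4]) {
	float y = x * x;              /* fold the even powers into y */
	float p = c[3];
	p = fmaf(p, y, c[2]);
	p = fmaf(p, y, c[1]);
	p = fmaf(p, y, c[0]);
	return x * p;                 /* multiply by x to restore odd symmetry */
}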

For Pade approximations this is even better, because tanh has periodic singularities (at i*pi/2, i*3pi/2, i*5pi/2, ...) in the complex plane. Fortunately for us, they come in complex conjugate pairs, and multiplying the corresponding factors together guarantees a denominator polynomial consisting of only even terms.

Check out the 3 smallest zeros of a Pade denominator. :)
https://www.wolframalpha.com/input/?i=z ... F21+%2B+1)

Code: Select all

x ≈ 1.57080i
x ≈ 4.71239i
x ≈ 7.85880i

Post

mystran wrote: Fri Feb 01, 2019 10:16 pm
juha_p wrote: Fri Feb 01, 2019 10:06 pm
mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like exactly?
Forgot to mention earlier that you can calculate this using WolframAlpha!
Oh, I tried various combinations of orders (more or less by "gut feel") with Maxima first, then asked when I couldn't find one that would give me the same performance.
Looks like there can be "remarkable" differences in results between various math software, as noted here (Maple vs Mathematica).

Post

juha_p wrote: Sat Feb 02, 2019 10:37 am
mystran wrote: Fri Feb 01, 2019 10:16 pm
juha_p wrote: Fri Feb 01, 2019 10:06 pm
mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like exactly?
Forgot to mention earlier that you can calculate this using WolframAlpha!
Oh, I tried various combinations of orders (more or less by "gut feel") with Maxima first, then asked when I couldn't find one that would give me the same performance.
Looks like there can be "remarkable" differences in results between various math software, as noted here (Maple vs Mathematica).
I think you forgot to transform the interval. We are not interested in the accuracy of the f(x) = tanh(sqrt(x))/sqrt(x) approximation itself; only the error of x*f(x*x) matters. Your error interval transforms to [-sqrt(20); sqrt(20)] for x*f(x*x).
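
A minimal sketch of that transformed-interval check (an illustration, not 2DaT's code; the (3,2) Pade approximant of tanh stands in for whatever f(y) was actually fitted):

Code: Select all

#include <math.h>
#include <stdio.h>

/* tanh(x) ~ x*(15 + x^2)/(15 + 6*x^2), written as f(y) with y = x*x */
static double f_pade(double y) {
	return (15.0 + y) / (15.0 + 6.0 * y);
}

int main(void) {
	const double xmax = sqrt(20.0);   /* [0;20] in y maps to this range in x */
	double worst = 0.0;
	for (int i = 0; i <= 20000; ++i) {
		double x = -xmax + 2.0 * xmax * i / 20000.0;
		double err = fabs(x * f_pade(x * x) - tanh(x));
		if (err > worst) worst = err;
	}
	printf("max |x*f(x^2) - tanh(x)| on [-sqrt(20); sqrt(20)]: %g\n", worst);
	return 0;
}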

While we are at it, here is an interesting approximation I found on the internet:
https://www.desmos.com/calculator/k3jzkhmoid
It's VERY close to minimax, because of its equioscillation behavior.

Post

juha_p wrote: Sat Feb 02, 2019 10:37 am
mystran wrote: Fri Feb 01, 2019 10:16 pm
juha_p wrote: Fri Feb 01, 2019 10:06 pm
mystran wrote: Thu Jan 31, 2019 6:32 pm
I actually can't reproduce this, so I wonder what your approximation looks like exactly?
Forgot to mention earlier that you can calculate this using WolframAlpha!
Oh, I tried various combinations of orders (more or less by "gut feel") with Maxima first, then asked when I couldn't find one that would give me the same performance.
Looks like there can be "remarkable" differences in results between various math software, as noted here (Maple vs Mathematica).
I only skimmed the paper quickly, but it seems that it basically comes down to Maple using floating-point computations. As far as I can tell, Maxima (and Mathematica) does these with exact rationals (at least when possible, and that's not a problem in this case), so it shouldn't be an issue here.

edit: although there's a little "gotcha" here: even if you compute the approximation itself using exact arithmetic, once you translate it into floating-point code (for the actual implementation), your error might increase a lot, especially for high-order approximations where both numerator and denominator are in monomial form.
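
A tiny illustration of that gotcha (made-up numbers, nothing from this thread): the same polynomial evaluated with coefficients kept in double precision versus coefficients rounded to float.

Code: Select all

#include <math.h>
#include <stdio.h>

int main(void) {
	/* 1/3 and 1/7 stand in for exact-rational fitting output */
	const double cd[3] = { 1.0, 1.0 / 3.0, 1.0 / 7.0 };
	const float  cf[3] = { 1.0f, (float)(1.0 / 3.0), (float)(1.0 / 7.0) };
	double maxdiff = 0.0;
	for (int i = 0; i <= 1000; ++i) {
		double x = -1.0 + 2.0 * i / 1000.0;
		double pd = (cd[2] * x + cd[1]) * x + cd[0];
		double pf = ((double)cf[2] * x + (double)cf[1]) * x + (double)cf[0];
		if (fabs(pd - pf) > maxdiff) maxdiff = fabs(pd - pf);
	}
	printf("error from rounding the coefficients alone: %g\n", maxdiff);
	return 0;
}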

Post

2DaT wrote: Sat Feb 02, 2019 11:42 am I think you forgot to transform the interval. We are not interested in the accuracy of the f(x) = tanh(sqrt(x))/sqrt(x) approximation itself; only the error of x*f(x*x) matters. Your error interval transforms to [-sqrt(20); sqrt(20)] for x*f(x*x).
...
Have to look into this.

BTW, did you use Lagrange or Newton to solve for those coeffs in your exp_mp() function?

Post

juha_p wrote: Sun Feb 03, 2019 12:02 pm
BTW, did you use Lagrange or Newton to solve for those coeffs in your exp_mp() function?
Remez algorithm.
https://en.wikipedia.org/wiki/Remez_algorithm

I use Newton divided differences to construct a polynomial from nodes, though it's not very stable for high-degree polynomials.
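
For reference, a minimal sketch of the standard divided-differences construction (a textbook version, not 2DaT's actual code; the nodes here are arbitrary demo values, where a Remez exchange would supply its own):

Code: Select all

#include <math.h>
#include <stdio.h>

#define N 4

/* build the Newton coefficients d[] in place, starting from the values ys[] */
static void divided_differences(const double xs[N], const double ys[N], double d[N]) {
	for (int i = 0; i < N; ++i) d[i] = ys[i];
	for (int j = 1; j < N; ++j)
		for (int i = N - 1; i >= j; --i)
			d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - j]);
}

/* nested (Horner-like) evaluation of the Newton form */
static double newton_eval(const double xs[N], const double d[N], double x) {
	double p = d[N - 1];
	for (int i = N - 2; i >= 0; --i)
		p = p * (x - xs[i]) + d[i];
	return p;
}

int main(void) {
	double xs[N] = { -0.5, -0.17, 0.17, 0.5 };   /* arbitrary demo nodes */
	double ys[N], d[N];
	for (int i = 0; i < N; ++i) ys[i] = exp(xs[i]);
	divided_differences(xs, ys, d);
	printf("p(0.25) = %.9f vs exp(0.25) = %.9f\n",
	       newton_eval(xs, d, 0.25), exp(0.25));
	return 0;
}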

Post

2DaT wrote: Mon Feb 04, 2019 2:56 pm I use Newton divided differences to construct a polynomial from nodes, though it's not very stable for high-degree polynomials.
Is this a problem with Newton polynomials themselves, or just when they are converted to monomials?

Post

mystran wrote: Mon Feb 04, 2019 3:09 pm
2DaT wrote: Mon Feb 04, 2019 2:56 pm I use Newton divided differences to construct a polynomial from nodes, though it's not very stable for high-degree polynomials.
Is this a problem with Newton polynomials themselves, or just when they are converted to monomials?
Don't know for sure, but we almost certainly want a monomial form in the end, because it evaluates nicely.

Post

2DaT wrote: Mon Feb 04, 2019 3:27 pm
mystran wrote: Mon Feb 04, 2019 3:09 pm
2DaT wrote: Mon Feb 04, 2019 2:56 pm I use Newton divided differences to construct a polynomial from nodes, though it's not very stable for high-degree polynomials.
Is this a problem with Newton polynomials themselves, or just when they are converted to monomials?
Don't know for sure, but we almost certainly want a monomial form in the end, because it evaluates nicely.
Well, strictly speaking we probably want Horner form (which is fewer ops and supposedly more stable as well), but I was mostly wondering whether one could improve stability during the actual fitting process by keeping the polynomials in a more stable form until the very end...

I've actually been banging my head against the wall for a somewhat related reason lately. As it turns out, one can evaluate Lagrange polynomials with degrees in the 3-digit range without any huge problems, but what I'd like to do is integrate those analytically, and while that's obviously trivial in monomial form, that strategy just isn't going to work in finite precision. :D
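
A sketch of one way such high-degree evaluation stays stable (the second barycentric form, which evaluates the Lagrange interpolant without ever forming monomial coefficients; the weights w[] are assumed precomputed, e.g. from the closed form for Chebyshev nodes):

Code: Select all

/* second barycentric form:
   p(x) = sum(w[i]*ys[i]/(x-xs[i])) / sum(w[i]/(x-xs[i])) */
double barycentric_eval(int n, const double xs[], const double ys[],
                        const double w[], double x) {
	double num = 0.0, den = 0.0;
	for (int i = 0; i < n; ++i) {
		double dx = x - xs[i];
		if (dx == 0.0) return ys[i];   /* x hits a node exactly */
		double t = w[i] / dx;
		num += t * ys[i];
		den += t;
	}
	return num / den;
}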

Post

I haven't been working on my test program for a while, but on the todo list is figuring out a computation problem with delta values. I would like the program to measure the delta of the error between the basis function and the approximation. However, I have not been able to decide how this should be done, for a few reasons. One reason is that the function domain may already be the entire range of the datatype, and in this case the delta may be invalid because one of the values used in the computation is not representative of the real number line (+/- INF, for example, or NaNs). The input domain could be shrunk to fit the necessary real numbers required to calculate the delta, but then another problem arises: is it 'better' to calculate the delta by iterating up the number line in the positive direction, or down? In the case of an exponential function, the deltas on the positive side are much greater than on the negative side. And if the input domain is shrunk to accommodate calculating the deltas, how will averages and other accumulating calculations between 0th- and 1st-order calculations compare? Is there a possibility of using some kind of correction factor to better correlate the results?

Or is this misrepresenting the actual usage of the function? Should I just raise a flag if the domain exception occurs, and expand the domain used for the delta calculation anyway? There is also the possibility of clamping or extrapolating the necessary values, but would that be misrepresentative as well? A sketch of the flag-and-skip option is below.
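
One possible shape for the flag-and-skip approach (a sketch under assumptions, not the actual test program; approx_exp is a stand-in for the approximation under test):

Code: Select all

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* stand-in approximation under test: truncated Taylor series of exp */
static float approx_exp(float x) {
	return 1.0f + x * (1.0f + 0.5f * x);
}

int main(void) {
	double prev = NAN;            /* previous error; NAN = "no valid neighbor" */
	double worst_delta = 0.0;
	long flagged = 0;
	/* walk the positive float bit patterns up to +INF (stride keeps it quick) */
	for (uint32_t u = 0; u <= 0x7f800000u; u += 999) {
		float x;
		memcpy(&x, &u, sizeof x);
		float r = expf(x), a = approx_exp(x);
		if (!isfinite(r) || !isfinite(a)) {
			++flagged;            /* domain exception: flag it, break the chain */
			prev = NAN;
			continue;
		}
		double err = (double)a - (double)r;
		if (!isnan(prev) && fabs(err - prev) > worst_delta)
			worst_delta = fabs(err - prev);
		prev = err;
	}
	printf("worst error delta %g, %ld samples flagged\n", worst_delta, flagged);
	return 0;
}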

Post

After playing around with approximating various functions (and examining approximations I've found), I've noticed you can improve the approximation by "extending" the base function in question. Here's an example approximation of exp(x) (as this thread is about), using coefficients obtained from Sollya's remez() method (degree 5):

Code: Select all

#include <math.h>      // floorf, fmaf
#include <stdbool.h>

// Builds 2^n by writing the biased exponent directly into the float bits
// (valid for n in the normal exponent range).
float iPowxF(float n) {
	union {
		int i;
		float f;
	} v;
	v.i = ((int)n + 127) << 23;
	return v.f;
}

// exp(x) via range reduction: x = n*ln(2) + r, so exp(x) = 2^n * exp(r),
// with exp(r) (or sqrt(exp(r)), squared afterwards) approximated by a
// degree-5 Remez polynomial fitted on [-0.5; 0.5].
float rem5_expx2(float x, bool q){
	// n = round(x * log2(e)); the constants are log2(e) and 0.5
	float z = floorf(fmaf(x, 0x1.715476p+0, 0x1p-1));
	x -= z * 0x1.62e43p-1;   // r = x - n*ln(2)
	int n = (int)z;
	float i = iPowxF(n);     // 2^n
	if(q){ // sqrt(exp(r)) on [-0.5;0.5], squared below
		z =             0x1.11e160b9aa845p-12;
		z = fmaf (z, x, 0x1.5666c1065d974p-9);
		z = fmaf (z, x, 0x1.5554d9989d3b1p-6);
		z = fmaf (z, x, 0x1.ffff32d90820dp-4);
		z = fmaf (z, x, 0x1.0000002715625p-1);
		z = fmaf (z, x, 0x1.0000002d9927cp+0);
		z *= z;
	}
	else{ // exp(r) on [-0.5;0.5]
		z =             0x1.145571e09980cp-7;
		z = fmaf (z, x, 0x1.599f46d20c844p-5);
		z = fmaf (z, x, 0x1.554d8f919671cp-3);
		z = fmaf (z, x, 0x1.fff31c9a0d43dp-2);
		z = fmaf (z, x, 0x1.000009d39af2bp+0);
		z = fmaf (z, x, 0x1.00000b7714b38p+0);
	}
	return i * z;
}
Error plot (attached image): remez5_sqrt(exp(x))_vs_exp(x).png
(the green line shows the relative error a/f - 1 for the direct f = exp(x) approximation a; the other line shows a^2/exp(x) - 1 for the f = sqrt(exp(x)) approximation a).

This might be too broad a question for this thread, but what is the theory behind this phenomenon, and are there certain rules, limits, or negative side effects in the selection of the extension of the base function, in order to achieve the best possible result?

Post

I'm not sure what you're referring to as "extension of the base function", maybe because I didn't read the entire thread; I'm also not sure which phenomenon you're referring to. However, since your relative error seems to increase towards the right, I would assume that you did the basic Remez with equal absolute error. You can do an equal relative error Remez for functions passing through 0, e.g. for 2^x - 1, by approximating f(x)/x with equal absolute error and then multiplying the result by x (IIRC that's what 2DaT did).
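
A sketch of that transformation (with the Taylor coefficients of (2^x - 1)/x standing in as placeholders for real Remez output, so the error figures are only illustrative): since g(x) = (2^x - 1)/x tends to ln(2) at x = 0, the divided-out target has no zero, and the relative error of x*p(x) against 2^x - 1 equals |p - g|/|g|, which stays bounded through zero.

Code: Select all

#include <math.h>
#include <stdio.h>

/* p(x) ~ (2^x - 1)/x; placeholder (Taylor) coefficients, not a minimax fit */
static double two_to_x_minus_1(double x) {
	const double L = 0.6931471805599453;   /* ln 2 */
	double p = L * L * L / 6.0;
	p = p * x + L * L / 2.0;
	p = p * x + L;
	return x * p;                          /* multiply back: restores the zero */
}

int main(void) {
	for (double x = -0.5; x <= 0.5; x += 0.2) {   /* grid avoids x = 0 exactly */
		double f = exp2(x) - 1.0;
		printf("x = % .2f   rel err = % .3g\n", x, two_to_x_minus_1(x) / f - 1.0);
	}
	return 0;
}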
