Tutorial: BLEPs (using PolyBLEPs but extensible)
- KVRAF
- 12554 posts since 7 Dec, 2004
While sinc() has no closed-form antiderivative, that isn't true for approximations of limited order. The consequence of using an approximation is that you limit the maximum order of derivative or antiderivative available.
http://mathworld.wolfram.com/SincFunction.html
There is plenty of information about this function since it's central to a huge range of fields. When it comes to its application to anti-aliasing I believe you'll find the implementation itself is a lot more complex than the underlying math.
Since these functions are implemented in many mathematics tools you could use them rather than trying to implement such things yourself.
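To make the connection to implementation concrete, here is a minimal sketch (my own illustration, not code from this thread) of building a BLEP step table by numerically integrating a Blackman-windowed sinc and then subtracting the trivial step. All names and parameters (`zero_crossings`, `oversample`) are assumptions, and a real implementation would use a better integration scheme.

```cpp
#include <cmath>
#include <vector>

// Build a BLEP residual table by numerically integrating a
// Blackman-windowed sinc. Hypothetical sketch; names are illustrative.
std::vector<double> make_blep_table(int zero_crossings, int oversample)
{
    const int n = 2 * zero_crossings * oversample + 1;
    std::vector<double> table(n);
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        // x spans [-zero_crossings, +zero_crossings] in sinc periods
        const double x = double(i - n / 2) / oversample;
        const double s = (x == 0.0) ? 1.0 : std::sin(pi * x) / (pi * x);
        // Blackman window over the table span
        const double w = 0.42 - 0.5 * std::cos(2.0 * pi * i / (n - 1))
                              + 0.08 * std::cos(4.0 * pi * i / (n - 1));
        sum += s * w / oversample;   // running integral of windowed sinc
        table[i] = sum;
    }
    // Normalize so the step settles at exactly 1, then subtract the
    // trivial step to leave only the band-limiting residual.
    for (int i = 0; i < n; ++i) {
        table[i] /= sum;
        if (i >= n / 2)
            table[i] -= 1.0;
    }
    return table;
}
```

The residual decays to zero at both ends of the table, which is what makes it cheap to mix into the output around each discontinuity.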
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.
-
- KVRer
- 4 posts since 9 Jan, 2018
Hi all!
For the last several weeks I have been experimenting with BLEP tables. (I created these tables both from the "pure" sinc function and using some filter-design algorithms: least squares and Parks-McClellan.) My oscillator with these tables works very well (~-120 dB)! But when I try to add lerp interpolation to the table lookup to improve the BLEP, I always get a frequency-dependent DC offset (~-85 dB at C9).
Is this normal?
Thanks!
- KVRAF
- 12554 posts since 7 Dec, 2004
If you use anything other than linear phase, or you pre-integrate the impulses, you'll end up with an offset. This is due to error from both the initial quantization (from discretizing the function into the table) and the interpolation you use to find fractional values between those discrete source values.
What you can do is measure the table's offset and subtract that offset when inserting impulses into the output buffer. The scaling is exactly equal to the spacing between impulses which correlates directly with phase delta.
Yes, this is normal. You should be able to implement direct computation of the output values (without discretizing the function into a table) which should eliminate the resulting offset, proving the source of the offset is related to the discretization itself.
Last edited by aciddose on Mon Dec 03, 2018 3:59 pm, edited 1 time in total.
- KVRAF
- 12554 posts since 7 Dec, 2004
Here is an approximate method to calculate the offset of the table (this is a dumb brute-force integration and it's likely possible to improve upon it.)
Code: Select all
void impulse_t::compute_offset()
{
    // T  = working precision (double?)
    // TB = table data type (float?)
    T sum = 0.0;
    const TB *p = data;
    const int simulated_cycles = 4096;
    for (int i = 0; i < samples; i++) {
        const int index = i * step_size;
        T phase = 0.0; // use T, not float, to avoid accumulating error
        const T delta = 1.0 / T(simulated_cycles);
        // simulated rate = just below nyquist
        const T simulated_rate = 0.5 - delta;
        for (int j = 0; j < simulated_cycles; j++) {
            // step the simulated oscillator until it wraps
            while (phase < 1.0) {
                phase += simulated_rate;
            }
            phase -= 1.0;
            // fractional position of the wrap, scaled to table steps
            const T phase_fraction = phase / simulated_rate;
            const T fraction = phase_fraction * step_size;
            const int impulse_offset = int(truncate(fraction));
            const T x = interpolate<T, TB>(p + index + impulse_offset,
                                           fraction - impulse_offset);
            sum += x * delta;
        }
    }
    offset = sum;
}
- KVRAF
- Topic Starter
- 7868 posts since 12 Feb, 2006 from Helsinki, Finland
Make sure that you are not trying to interpolate over the discontinuity that results if you subtract the trivial step from the tables you store; the trivial step must be subtracted from the final interpolated result, otherwise you'll end up with more problems than just some DC offset (although it can still be quite subtle if your kernels are long enough).
If this is the problem, the most obvious solution (that doesn't add much overhead) is to store the integrated kernel as-is (ie. without subtracting the step) and then split the runtime BLEP addition loop into two halves (to avoid putting a branch inside the loop), where the first half copies interpolated data, second half does the same but also subtracts the unit step.
- KVRist
- 35 posts since 6 Apr, 2004 from Denmark
This is nice.
I'm trying to extend the original coding example to be used with wavetables (not just saws or squares).
I cannot wrap my head around what the parameter variable t represents... is it the current phase "position" mapped to be between 0 and 1?
The polyBLEP is dependent on the current playback frequency, which it gets from the dt parameter?
And then you apply the amount of correction to the current sample based on the current phase position?
But that means that if you are at the start of the phase (t=0), no correction can ever be applied.
Maybe my actual question is: what parameters is the polynomial dependent upon?
I should add that I have discontinuity detection on my (single-cycle) wavetables in place, so I already know when I want to polyBLEP and when I do not. I just need to understand what to feed the polyBLEP function.
Last edited by cjohs on Fri Jul 17, 2020 1:07 pm, edited 1 time in total.
- KVRAF
- Topic Starter
- 7868 posts since 12 Feb, 2006 from Helsinki, Finland
The sub-sample offset of the discontinuity (ie. where in [0,1] it occurs within the sample).
- KVRist
- 35 posts since 6 Apr, 2004 from Denmark
So if I understand correctly, the polynomial "runs in parallel" with the wavetable contents from the first sample to the last sample in the wavetable. Then if I have a discontinuity, the sample correction is taken from that position on the polynomial.
To understand properly, I've attached the pictures below, which display the polynomial and three example wavetables (3 square waves with different pulse widths). The resulting correction from the polynomial will be wildly different: for the first square the correction will be about 0.08, for the second it will be about 0.001 and for the last one it will be about 0.3.
-
- KVRian
- 1273 posts since 9 Jan, 2006
A polyBLEP should be centred on a discontinuity. Depending on the implementation it may be just a couple of samples in length, but it can be longer.
Last edited by matt42 on Thu Jul 23, 2020 11:17 am, edited 2 times in total.
-
- KVRian
- 832 posts since 21 Feb, 2006 from FI
Q: Is it meaningful which form the correction curve has? As an example, what if one uses a linear "curve"? (Desmos plot)
- KVRist
- 35 posts since 6 Apr, 2004 from Denmark
But in the implementation in this tutorial the polyBLEP function takes phase as the only parameter, so it can't be centered on the discontinuity... (see @mystran's previous post - he wrote the tutorial...)
-
- KVRian
- 1273 posts since 9 Jan, 2006
juha_p wrote: ↑Thu Jul 23, 2020 11:27 am Q: Is it meaningful which form the correction curve has? As for an example if one uses linear "curve" (Desmos plot)
Combine your residuals with a step function and you should notice that the linear version has much more visually obvious discontinuities. I haven't checked the spectra, but I'd guess it will alias a lot more. https://www.desmos.com/calculator/t1dguzczh7
Last edited by matt42 on Thu Jul 23, 2020 5:55 pm, edited 1 time in total.
-
- KVRian
- 1273 posts since 9 Jan, 2006
I didn't have time to look at mystran's post properly, but I would guess the t parameter refers to the sub-sample position of the discontinuity.
- KVRist
- 35 posts since 6 Apr, 2004 from Denmark
matt42 wrote: ↑Thu Jul 23, 2020 4:47 pm I didn't have time to look at mystran's post properly, but I would guess the t parameter refers to the sub sample position of the discontinuity.
Exactly - I think we actually agree, just using different terminology.
Yes, the t input to the polyBlep function yields the graph I plotted above and is the subsample position of the discontinuity ...