Sine hard sync

DSP, Plugin and Host development discussion.

Post

rigatoni_modular wrote: Sun Nov 12, 2023 2:15 am Fortunately, I'm using the 2nd sync offset scheme you mentioned. I'm a little confused about the 2nd code sample though - shouldn't phase_at_sync just be the old phase plus delta? The subtraction there is puzzling.
No, the phase at sync is the old phase plus delta scaled by (1-sync_frac), and that can wrap around too. By going backwards from the current phase (which is the old phase plus delta, with a wraparound if it happened), it is possible to compute it with fewer operations.
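To make the backward computation concrete, here is a minimal sketch (my own illustration, not code from the thread), assuming a [0,1) phase accumulator where `sync_frac` is the fraction of the current sample lying after the sync event:

```python
# Illustrative sketch of the backward phase_at_sync computation.
# Assumptions: phase accumulates in [0,1); new_phase already includes
# the per-sample increment `delta` and any wraparound; `sync_frac` is
# the fraction of the current sample lying *after* the sync event.

def phase_at_sync(new_phase, delta, sync_frac):
    # Forward form would be: old_phase + delta * (1 - sync_frac), wrapped.
    # Going backwards from new_phase reuses the wrap already applied:
    p = new_phase - delta * sync_frac
    if p < 0.0:          # undo the wraparound if we stepped back across it
        p += 1.0
    return p
```

For example, with an old phase of 0.9 and delta 0.2 (so new_phase wraps to 0.1) and sync_frac 0.25, both the forward and backward forms give 0.05.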

Post

2DaT wrote: Sun Nov 12, 2023 12:39 am Also, under extreme PWM modulation, the pulse can back-swing (i.e. go from triggered back to untriggered without the phase passing through 1), which needs to be handled.
Well, you can implement PWM with a comparator or a latch (which holds high until explicitly reset by the oscillator sync signal). A comparator can back-swing if you exceed the hysteresis threshold; a latch can't. A pure linear high-gain amp (i.e. no hysteresis) is also possible, but it risks glitchy behavior at "slow edges" once the signals contain noise (as they always do in analog).

If you can choose what circuit to model, just model one that uses a latch and you don't need to worry about backswing. I'd argue it's actually the "better" circuit, though the conditions where a comparator and a latch based circuit produce different results are pretty extreme and when you're doing that kind of stuff "better" is arguably quite subjective. The latch approach does not work with a triangle core though, or if you want "thru-zero FM" or something.

PS: As for the actual comparison: the trigger condition is saw(t) > pwm(t), which we can rewrite as saw(t) - pwm(t) > 0. So when solving for the crossing point, just subtract the previous PWM value from the previous phase and the current PWM value from the current phase in the linear equation and compare against zero; then it's exactly the same as any other transition.
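The subtract-then-compare idea can be sketched as follows (illustrative code, not from the thread; names are assumptions), treating the difference as linear across the sample:

```python
# Illustrative crossing-point solve for the condition saw(t) - pwm(t) > 0.
# Treat d(t) = saw(t) - pwm(t) as linear between the previous and current
# sample; the zero crossing is then found like any other transition.

def crossing_fraction(saw0, pwm0, saw1, pwm1):
    d0 = saw0 - pwm0   # difference at the previous sample
    d1 = saw1 - pwm1   # difference at the current sample
    # Assumes a sign change d0 <= 0 < d1; returns the fraction of the
    # sample period at which the linear segment crosses zero.
    return d0 / (d0 - d1)
```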

Post

mystran wrote: Sun Nov 12, 2023 9:38 am If you can choose what circuit to model, just model one that uses a latch and you don't need to worry about backswing. I'd argue it's actually the "better" circuit, though the conditions where a comparator and a latch based circuit produce different results are pretty extreme and when you're doing that kind of stuff "better" is arguably quite subjective.
At least "objectively", I'd argue that the "latch" approach is rather "wrong" from a purely theoretical perspective, because it's highly tied to specific analog circuit details. It also adds an uncalled-for irregularity to the results, so unless this sound is specifically being looked for, I'd argue it's wrong. I also wonder what the real analog circuits are doing. I would compare this to unipolar (clipped) vs. bipolar (unclipped) FM; IIRC the former doesn't sound too good, even though it saves a lot in analog implementation costs.

Post

Z1202 wrote: Sun Nov 12, 2023 1:49 pm
mystran wrote: Sun Nov 12, 2023 9:38 am If you can choose what circuit to model, just model one that uses a latch and you don't need to worry about backswing. I'd argue it's actually the "better" circuit, though the conditions where a comparator and a latch based circuit produce different results are pretty extreme and when you're doing that kind of stuff "better" is arguably quite subjective.
At least "objectively", I'd argue that the "latch" approach is rather "wrong" from a purely theoretical perspective, because it's highly tied to specific analog circuit details. It also adds an uncalled-for irregularity to the results, so unless this sound is specifically being looked for, I'd argue it's wrong. I also wonder what the real analog circuits are doing.
Well, with noisy signals (and all signals are noisy in analog) your sensible choices are either a comparator with some hysteresis or a latch. Both of these are "wrong" from a mathematical point of view, but without one or the other, with a low-frequency carrier you risk oscillation around the threshold due to noise, even with a nominally constant pulse width.
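For illustration, the two "noise-safe" trigger behaviors being compared might be sketched like this (class names, thresholds, and the hysteresis amount are my own assumptions, not a model of any specific circuit):

```python
# Illustrative sketch: comparator-with-hysteresis vs. latch triggering.
# A comparator can back-swing once the input crosses back past the
# hysteresis band; a latch holds until explicitly reset by oscillator sync.

class HysteresisComparator:
    def __init__(self, hyst=0.01):
        self.hyst = hyst
        self.state = False

    def step(self, x, threshold):
        if not self.state and x > threshold + self.hyst:
            self.state = True      # triggered
        elif self.state and x < threshold - self.hyst:
            self.state = False     # back-swing: can un-trigger
        return self.state

class Latch:
    def __init__(self):
        self.state = False

    def step(self, x, threshold):
        if x > threshold:
            self.state = True      # holds high, cannot back-swing
        return self.state

    def reset(self):               # driven by the oscillator sync signal
        self.state = False
```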

That said, you could probably make a two-oscillator synth where one oscillator used a comparator and the other used a latch, and I highly doubt anyone would even notice. What people would notice, though, is a design where the "normal" case of LFO PWM modulation caused glitchy oscillations due to noise.

Post

Even if that's true and there is no way around it in analog (like a variable amount of hysteresis), I'd question whether this is the kind of analog artifact we'd be looking to emulate. As for sound differences, it's best to run an experiment which, among other things, specifically targets the sensitive areas. Don't forget audio-rate PWM.

Post

2DaT wrote: Sun Nov 12, 2023 2:46 am No, the phase at sync is the old phase plus delta scaled by (1-sync_frac), and that can wrap around too. By going backwards from the current phase (which is the old phase plus delta, with a wraparound if it happened), it is possible to compute it with fewer operations.
Oh no... I'm basically slapping my forehead right now. The BLEP sync implementation I was following (VCV Fundamental VCO, since I'm working on a VCV plugin) uses "phase right now if sync hadn't happened" and "phase right now since sync happened" as the two phases from which to calculate the discontinuity. Now that you mention it, that implementation is incorrect, but the artifacts probably won't show up with square or saw waves because the slope is pretty much always constant.

As I understand it from your post, the correct way is to take "phase at sync, before sync" and "phase at sync, after sync" (which is 0), which makes a whole lot of sense. I feel stupid for doing it wrong for so long. Thank you.
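A minimal sketch of the corrected discontinuity computation (my own illustration, assuming a sine core whose reset target is phase 0):

```python
import math

# Step to be smoothed by the BLEP at a hard-sync event: evaluate the
# waveform at the phase *at the sync instant* before the reset, and at
# the post-reset phase, which is simply 0.

def sync_discontinuity(phase_at_sync, wave=lambda p: math.sin(2.0 * math.pi * p)):
    return wave(0.0) - wave(phase_at_sync)   # amplitude jump across the reset
```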

Post

rigatoni_modular wrote: Sat Nov 11, 2023 8:38 pm ...

I'm pretty sure that my sub-sample offset is correct, so is it possible that I have a magnitude or interpolation issue, or does this look about right?

AntiAliasingView1.png
Have you compared against the code example from the companion page for "A General Antialiasing Method for Sine Hard Sync"?

Code:

figure;
fs = 96000;
f1 = 5450.2;
f0 = 6845.6;
L = 200;
[y_trivial, y_poly, y_tri, y_spline, y_trig, y_FSM] = sine_hard_sync(f0, f1, fs, L);

plot(y_trivial, 'linewidth', 2)
hold on;
plot(y_poly, 'linewidth', 2)
plot(y_tri, 'linewidth', 2)
plot(y_spline, 'linewidth', 2)
plot(y_trig, 'linewidth', 2)
plot(y_FSM, 'linewidth', 2)

legend('y_trivial', 'y_poly', 'y_tri', 'y_spline', 'y_trig', 'y_FSM', 'location', 'southwest')
axis([0 125 -1.5 1.5]);
grid on;
gives:
sinehardsyncexample.png

Post

Z1202 wrote: Sun Nov 12, 2023 3:42 pm I'd question whether this is the kind of analog artifact we'd be looking to emulate.
Well, that's a matter of taste. I like stuff to be noisy, not just PWM control signals, but even the saw-core oscillator reset threshold... but I do realize that's probably not for everyone.
Don't forget audio-rate PWM.
Well... once the PWM "control signal" bandwidth gets high enough, we run into another problem: treating it as a sampled piecewise-linear signal is not necessarily good anymore, and you should really treat it as a continuous-time signal. At that point you're really trying to compute the result of thresholding a sum of two audio-bandwidth signals, and either one of those can produce aliasing just the same.

Post

The mention of PWM is interesting. I haven't read the entire thread for context, but I've taken an interest in using PWM-type signals as effects in some way.

To use PWM at the native sampling rate, I assume, requires bit-depth reduction to cut the overhead of storing and processing a PWM value at an unnecessary bit depth. Without this reduction, I could easily use a much lower simulated sample rate for experimentation, with all the problems that entails.

Knowing that PWM is not as easily understood as a basic number form like PCM, it still seems interesting what could happen if the values are modified in different ways.

Post

Update: great news! After correcting the magnitude of my discontinuity as per 2DaT's messages, my 0th-order BLEP finally works as expected. Really stoked about this.

I haven't gotten the 1st-, 2nd-, or 3rd-derivative BLEPs working well yet, but I'll chalk that up to not calculating the derivatives at the right phase. My module has a quadrature mode where you get additive sines plus a 90°-phase-shifted version of that. Because the phases of the different partials will be at different spots when a sync happens, it gets a little complicated for the quadrature output.

I'm considering using a lower-quality sin approximation for some of these discontinuity calculations, since I'm already doing over 300 7th-order Chebyshev-polynomial sin evaluations per sample to get normal and quadrature outputs with up to 64 partials and BLEP.
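As an illustration of the accuracy-versus-cost tradeoff (my own sketch, not the module's actual code), a 7th-order odd polynomial already gets a sine to roughly 1e-4 absolute error on [-pi/2, pi/2]. Plain Taylor coefficients are used here; a minimax or Chebyshev fit would be somewhat more accurate at the same cost:

```python
import math

def fast_sin(x):
    # 7th-order odd polynomial sine for |x| <= pi/2 (Taylor coefficients),
    # evaluated in Horner form: three multiplies and three adds plus the
    # final multiply by x.
    x2 = x * x
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0 - x2 / 5040.0)))
```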

Post

rigatoni_modular wrote: Wed Nov 15, 2023 6:20 pm ...
I'm considering using a lower-quality sin approximation for some of these discontinuity calculations, since I'm already doing over 300 7th-order Chebyshev-polynomial sin evaluations per sample to get normal and quadrature outputs with up to 64 partials and BLEP.
That's quite a lot of ops per sample...

Would it help if you used integer-step frequencies (or is it a must to use fractional frequencies)? Integer frequencies produce a 'short' repeating pattern...

Post

juha_p wrote: Wed Nov 15, 2023 9:58 pm That's quite a lot of ops per sample...

Would it help if you used integer-step frequencies (or is it a must to use fractional frequencies)? Integer frequencies produce a repeating pattern...
Unfortunately, the frequency needs to be as continuous as possible for modulation purposes. I have a couple of tricks up my sleeve though...

Post

Chebyshev approximations have been mentioned in the context of this thread, and I'd like to ask a general question about them. I'm not familiar with the details, but IIUC they are "somewhat suboptimal minimax approximations".

In a DSP context I can imagine two main, kind of opposite, approaches to piecewise-segment approximation of functions.
  • Matched-derivative approximations, obtained by equating the left and right derivatives (up to some order, determined by the order of the approximating polynomial) at each segment boundary. Notice that we usually don't use the true derivative of the function here, as this would unnecessarily take away half of the degrees of freedom.
  • Minimax (Remez) approximations
The latter obviously give the best precision for the function itself, but the derivatives are pretty much off, especially closer to the segment's ends. Therefore, at lower approximation precisions it's usually beneficial to use the matched-derivatives approach. Even though the maximum error is larger, the obtained spectra are usually better due to much smoother connections between the segments.
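The matched-derivative approach can be illustrated for a single cubic segment (my own sketch): the endpoint values f0, f1 and derivatives d0, d1 are free parameters shared with the neighboring segments, which forces C1 joins by construction:

```python
def hermite_segment(f0, f1, d0, d1):
    # Cubic a + b*t + c*t^2 + d*t^3 on t in [0, 1] matching the endpoint
    # values (f0, f1) and endpoint derivatives (d0, d1). Because adjacent
    # segments share their boundary value and derivative, the piecewise
    # result is C1 continuous.
    a = f0
    b = d0
    c = 3.0 * (f1 - f0) - 2.0 * d0 - d1
    d = 2.0 * (f0 - f1) + d0 + d1
    return a, b, c, d
```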

The minimax approximations become superior at high approximation precisions, where the approximation error can simply be seen as a small amount of added noise. Although I would still somewhat question the spectral properties of that added noise, since it can exhibit a "harmonic-like" structure with infinite bandwidth, so it can look like your typical aliasing. But with sufficient precision it probably becomes negligible, even considering the somewhat uneven spectral distribution.

At any rate, I'm not sure what the reason would be to go for the drawbacks of the minimax approach without utilizing its benefits (high precision) in full. I also doubt that staying somewhere midway (Chebyshev approximations) is a good tradeoff between drawbacks and benefits. I'd rather expect it to exhibit the drawbacks to a similar degree, just without the full benefits of minimax.

Any thoughts?

Post

Z1202 wrote: Thu Nov 16, 2023 11:51 am The latter obviously give the best precision for the function itself, but the derivatives are pretty much off, especially closer to the segment's ends.
If I'm not mistaken, the optimal minimax polynomial doesn't interpolate the endpoints, so in a piecewise context it's not even C0 continuous unless you end up with a happy accident... so even if you don't bother fixing the derivatives, at least forcing continuity might be a good idea at low orders.

That said, I wonder if it's workable to minimax a derivative (e.g. cosine) and then just integrate the polynomial (e.g. into a sine approximation) to get one extra order of continuity. I imagine one might need to fudge the scaling and the result might not be truly minimax, but it should be close?
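As a sketch of that integrate-the-derivative idea (Taylor cosine coefficients are used here purely for illustration; a real minimax fit of cos would have slightly different coefficients):

```python
import math

def integrate_poly(coeffs):
    # coeffs[k] multiplies x**(2k) in an even, cos-like polynomial;
    # the result's entry [k] multiplies x**(2k+1) in the term-by-term
    # antiderivative, an odd, sin-like polynomial.
    return [c / (2 * k + 1) for k, c in enumerate(coeffs)]

cos_c = [1.0, -1.0 / 2.0, 1.0 / 24.0, -1.0 / 720.0]   # cos approximation
sin_c = integrate_poly(cos_c)                          # sin approximation, one order smoother

def sin_approx(x):
    # Horner evaluation of the odd polynomial in x^2.
    x2 = x * x
    acc = 0.0
    for c in reversed(sin_c):
        acc = acc * x2 + c
    return x * acc

scale = 1.0 / sin_approx(math.pi / 2.0)   # the "fudge scaling" so sin(pi/2) = 1
```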

[edit: something like a half-period sine/cosine converges really quickly though, and half the coefficients are zero due to symmetry anyway, so I don't think you necessarily need to get too fancy with these]

I think the main thing going for Chebyshev (a truncated Fourier series with a parameter substitution? edit: actually a cosine series, I guess... but same thing) is that they are easy to compute and a good initial guess for iterative optimization.

That said, I remember reading a paper that argued that optimizing for both least-squares and minimax simultaneously can give a result that's surprisingly close to both at the same time. I think it might have been in the context of FIR design, but that's basically polynomials too.

Post

I could not test njuffa's implementation right now (I did it a couple of years ago and remember it being slow on emulated FMA), but he says it's 1.5 ULP in the range [0, PI].
