How would you compensate pattern length (due to step approximation)?

DSP, Plugin and Host development discussion.

Post

Hi all again.

Let's say I want my plugin to output a note 1 beat long, starting at the second beat. The pattern length is 4 beats, then it loops.

Sample rate is 44100. From the tempo, I get the step size in beats per sample:

stepSize = (BPMTempo / 60) / SampleRate;

(For example, at 120 BPM that is (120 / 60) / 44100 ≈ 0.0000454 beats per sample.) At each sample iteration I add it to an accumulator, then check the position and output MIDI, or loop, accordingly. This would be the pseudocode:

Code:

step = 0
foreach sample
	if (step == 1)          // 1 beat in: the second beat
		output noteOn
	else if (step == 2)     // 1 beat later
		output noteOff

	step += stepSize

	if (step >= 4)          // end of the 4-beat pattern
		step = 0            // loop: restart
end
The fact is: when I loop, step will never be exactly 4. Due to the approximation I lose a little bit of time, and the pattern comes out slightly short. Then the next note (after the first loop) will always play a bit "earlier" than beat 6. For 1-10 loops this only loses a few samples, but after minutes the difference becomes huge.

How would you fix this problem or approximation? Manage an offset?
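
A minimal, self-contained sketch of this drift (the 44.1 kHz rate, the 133 BPM tempo, and the 1000 passes are assumed values for the demonstration, not the plugin code itself):

Code:

#include <cstdio>

int main() {
    const double sampleRate = 44100.0;
    const double bpm = 133.0;                                  // assumed tempo
    const double stepSize = (bpm / 60.0) / sampleRate;         // beats per sample
    const double exactPattern = 4.0 * 60.0 * sampleRate / bpm; // 79578.947... samples

    double step = 0.0;
    long long pos = 0;        // integer sample position
    const int loops = 1000;
    for (int i = 0; i < loops; ++i) {
        while (step < 4.0) { step += stepSize; ++pos; } // one pass of the pattern
        step = 0.0;           // the reset discards the fractional overshoot
    }
    // pos no longer matches 1000 exact pattern lengths:
    printf("consumed %lld samples, exact would be %.1f\n", pos, loops * exactPattern);
    return 0;
}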

Thanks!

Post

Code:

float samplesPerStep = 60 * sampleRate / tempoBPM   // samples per beat
int currentSample = 0
float nextBeatBoundary = 0
int step = 0
while (not finished) {
    if (nextBeatBoundary <= currentSample) {   // boundary crossed?
        nextBeatBoundary += samplesPerStep     // keep the fraction: no reset to zero
        step++
        if (step > 4) { step = 1 }
        if (step == 1) {
            noteOn
        } else if (step == 2) {
            noteOff
        }
    }
    currentSample++
}
But there's no need to calculate this; there's a VST function to get time info from the host:
http://ygrabit.steinberg.de/~ygrabit/pu ... eInfo.html

Post

BertKoor wrote: (code quoted above)
But it's the same :o If I have tempo 133, nextBeatBoundary will be something like 19894.736842105263157894736842105 (which gets approximated at some point, even using double). I'll lose part of the length over time, since I'm working with integer sample positions.
BertKoor wrote: But there's no need to calculate this, there's a VST function to get time info from the host:
http://ygrabit.steinberg.de/~ygrabit/pu ... eInfo.html
It's not cumulative (i.e., if I play a step sequencer instead of the arrangement, the indexes reset when it loops).
Also, in the framework I use (IPlug), the time info refreshes once per buffer, so it's not available "step by step" during the iteration over samples.
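
One common workaround (a sketch with placeholder names, not the actual IPlug API): take the ppq position and tempo the host reports at the start of each buffer, and derive a per-sample ppq inside the buffer yourself:

Code:

// ppqAtBlockStart and tempoBPM stand for whatever the framework reports
// at the top of the block (placeholder names, not a real IPlug signature).
void processBlock(double ppqAtBlockStart, double tempoBPM,
                  double sampleRate, int numFrames)
{
    const double ppqPerSample = tempoBPM / 60.0 / sampleRate;
    for (int s = 0; s < numFrames; ++s) {
        const double ppq = ppqAtBlockStart + s * ppqPerSample; // sample-accurate
        // ... compare ppq against the event grid here ...
    }
    // On the next block, start again from the host's fresh ppq value instead
    // of accumulating a private counter, so per-block error cannot pile up.
}

Since each block restarts from the host's value, any rounding inside a block never survives past that block.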

Post

Nowhk wrote: But it's the same :o
No it's not. In my algorithm there's no rounding, and only detection of crossing the boundary instead of checking for being exactly on the boundary. That should take care of your observed drift, right?

Post

BertKoor wrote:
Nowhk wrote: But it's the same :o
No it's not. In my algorithm there's no rounding, and only detection of crossing the boundary instead of checking for being exactly on the boundary.
There's rounding, if I'm not wrong...

Try it with tempo 133. The boundary is something like 19894.736842105263157894736842105. You will loop at sample 19895, and compensate the length of the (next) pattern by the remaining 0.263157894736842105... via nextBeatBoundary += samplesPerStep.

But the noteOn plays at sample 19895, not at 19894.736842105263157894736842105. So the distances between the first, second, and third noteOns are different (not uniform).

Post

This algorithm is accurate to within one sample. That's close enough for rock 'n' roll (and most other genres as well), and the rounding error should not be cumulative. You could make a correction to perform the event on the sample before the boundary instead of the one after. But do you have any idea how long it takes for a NoteOn event to be sent through a hardware MIDI interface? (At 31250 baud with 10 bits per byte, a 3-byte NoteOn takes about 1 ms, i.e., roughly 44 samples at 44.1 kHz.)

Still it's better to derive the position within a bar from the time info provided by the host.
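
A sketch of that correction (an assumed helper, not code from this thread): compute each ideal boundary by multiplication and round to the nearest sample, so every event lands within half a sample of its exact time:

Code:

#include <cmath>

// Absolute sample index for the start of beat 'beatIndex', rounded to the
// nearest sample rather than the first sample after the boundary.
long long eventSample(long long beatIndex, double samplesPerBeat)
{
    return llround(beatIndex * samplesPerBeat);
}
// e.g. with samplesPerBeat = 19894.7368... the beats land at
// 0, 19895, 39789, 59684, ... with no cumulative error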

Post

Hi Nowhk

Rounded to the sample is plenty close enough for noteon/off. That is lots closer than midi sequencer tick-based timing, and midi sequencer timing is usually close enough to be transparent if the PPQN (pulses per quarter note) is reasonably big.

For instance, many sequencers have used PPQN = 480. At 125 BPM, 480 PPQN, 1 minute contains 60000 ticks. Each tick has a duration of 1 ms. At a samplerate of 44.1 k, each 1 ms tick has a duration of 44.1 samples. So basically, even neglecting other slop in the sequencer, you could say that a 125 BPM, 480 PPQN MIDI sequencer is rounded by 44.1 samples per tick.

At 250 BPM, each tick is rounded by 22.05 samples; at 62.5 BPM, by 88.2 samples.

So if you can round it to 1 sample per Quarter Note, or Eighth Note, or whatever you need, then that is lots closer than typical midi sequencer timing.

BertKoor's code will work fine. Also agreed it might be better to get tempo/time info from the host.

With this kind of code, beware of bugs that lose or gain a little bit of time on each loop. It is not rocket surgery, but it is fairly easy to slightly mess up the math so that the loop gradually loses sync with the sequence. When the gain/loss per loop is very tiny, sometimes a sequence has to run a while before the ear notices.
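
For what it's worth, here is a sketch of one structure that cannot gain or lose time per loop (assuming a constant tempo and a 4-beat pattern; the names are made up): derive every boundary by multiplication from a single origin and never reset an accumulator:

Code:

#include <cmath>

// Hypothetical drift-proof pattern clock: the beat counter only ever
// increments, and each boundary is recomputed from the origin, so no
// per-loop error can accumulate.
struct PatternClock {
    double samplesPerBeat;   // 60 * sampleRate / bpm
    long long beat = 0;      // global beat counter, never reset

    // Call once per sample; returns the pattern step (0..3) on a beat
    // boundary, or -1 otherwise.
    int tick(long long samplePos) {
        const long long boundary = llround(beat * samplesPerBeat);
        if (samplePos >= boundary) {
            const int step = (int)(beat % 4); // position inside the pattern
            ++beat;
            return step;
        }
        return -1;
    }
};

The rounding error of each boundary is bounded by a single multiplication, so the thousandth loop is exactly as accurate as the first.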

Post

JCJR wrote: Rounded to the sample is plenty close enough for noteon/off. (full explanation quoted above)
Nice observation! Thanks!
JCJR wrote: Also agreed it might be better to get tempo/time info from the host.
The problem is that while the DAW is playing I need to manage an internal plugin loop.

So, for example, I'd like to start at PPQ 1 (the 1st beat) and loop at PPQ 3 (2 beats long). I can't deduce my position from the host's PPQ value at each sample/frame, because the two follow different paths during playback.

In the DAW (playing the pattern, not the arrangement, i.e., looping every 4 beats = 1 bar), at the 7th consecutive beat the PPQ is 3 (4+3). The starting PPQ was 0.

Within my loop it would be 1 (2+2+2+1), starting at PPQ = 1.

I'm not sure I can stay in sync using the DAW's PPQ plus internal looping; that's why I think I need my own incremental counter (which will drift at some point due to approximation). But maybe I'm wrong? That would be nice... :hihi:
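
(A sketch of one way to do that mapping, under the assumptions of the example above: an internal loop starting at PPQ 1 and 2 beats long. Fold the host's PPQ into the internal loop instead of running a separate counter:)

Code:

#include <cmath>

// Wrap the host's ppq position into an internal loop region.
// loopStart = 1.0 and loopLen = 2.0 match the example above (assumptions).
double internalPpq(double hostPpq, double loopStart, double loopLen)
{
    if (hostPpq < loopStart)
        return hostPpq;      // before the loop region starts
    return loopStart + std::fmod(hostPpq - loopStart, loopLen);
}
// internalPpq(3.0, 1.0, 2.0) == 1.0, matching "within my loop it will be 1"

This stays consistent across the DAW's own loop wraps as long as the host loop length (4 beats here) is a multiple of the internal loop length.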

Post

Hi Nowhk

Do you envision your generated notes or other musical events always being realtime? Or do you expect the user to sometimes record your output to a DAW MIDI track?

Just saying, depending on the way the DAW works, any MIDI-type data recorded back into the DAW might get quantized to whatever PPQN the DAW uses. It is hard to generalize because different DAWs may work very differently. Just something to think about. MIDI you loop back for recording into the DAW might get quantized to ticks. So as long as you are sending within the time window of an individual tick in the DAW, the recorded results might be exactly the same?

There is that dichotomy in most DAWs: two time-bases. Tempo-based time, typically measured in ticks, and time-based time, which if not measured in samples would at least be equivalent to samples. The two time-bases slide against each other, and the goal is to keep them in as close agreement as possible.

Am guessing a DAW could be completely deterministic rendering to disk. Render the same song three times and get three exactly identical files. Maybe some DAWs are written so that they are entirely deterministic when playing real-time. Dunno.

In playback, if you have both tempo and time being resolved in real time, it might be difficult to be entirely deterministic. A "digital tape recorder" equivalent with no tempo-based features, would be easier to make entirely deterministic.

Juggling both time and tempo in realtime playback, with computer interrupts and such happening at different times on each playback, maybe it will be so close to the same on each playback that it sounds the same to the user. However, down at the sample or microsecond scale there might be some slop: tiny variations between each play.

It can depend on how ambitious you are about the kind of music you support. If 4/4 constant tempo throughout the song is all you intend to support (which is an entirely valid way to do it), then it could be straightforward. If you want to work properly even when the sequencer is following a tempo/bar map, with potentially different time signatures during the song and arbitrary tempo changes at any time in the playback, it gets confusing.

And maybe the sequencer is either permanently set to a long loop, or temporarily set to play a shorter loop. For instance, the user has a 150-bar song but sets his DAW to temporarily loop 5 bars somewhere in the song so he can work on a particular section. To loop properly along with the sequencer, and follow tempo and time-signature changes if they exist in that section, can hurt the brain.

Probably best to start simple and build from there, anyway.

Post

I totally forgot about this topic! Sorry man, I'm not ignoring you :)

Yes, if I can round down to 1 sample (or a few more) I think I'm OK, especially since I'm noticing I can't tell the difference :)

So the right question now is: why can't I/we tell the difference? Is it our auditory system that "compensates" for timing while listening? What is this phenomenon called? I don't think it's auditory masking: that's about the frequency domain. Any clues? Just curious :)

Thanks again for the help. Happy discussion ;)

Post

Any idea?
