Parameter input range

DSP, Plugin and Host development discussion.

Post

An example of breaking previously saved plugin states is a plugin changing its internal mapping of a fixed-range input (0.f - 1.f, for example). If a plugin dev wanted to change their mapping, for example to extend the range of a filter cutoff or a delay line, how could they do this without breaking previously saved states? One quick idea is to make the internal mapping selectable by the user: one option compatible with the old plugin version, the other with the redesigned range. But would doing that just be a kludge and pose more problems in the future?

Post

I don’t know what others do. But I personally do this by implementing versioning in the saved states/presets. So when opening a preset, if the saved version is old, remap the saved value to the new range, that is, change the parameter position so that it gives the same old saved sound. If the saved version is new, then just apply the value obviously.
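In code, the idea looks something like this (a minimal sketch; the ranges, curve, and names are all invented for illustration):

```cpp
#include <cmath>
#include <cstdint>

constexpr uint32_t kCurrentVersion = 2;

// Hypothetical mappings: v1 mapped the normalized cutoff to 20 Hz..10 kHz,
// v2 extends it to 20 Hz..20 kHz, both log-shaped.
static double cutoffHzV1(double norm) { return 20.0 * std::pow(500.0,  norm); }
static double cutoffHzV2(double norm) { return 20.0 * std::pow(1000.0, norm); }
static double normFromHzV2(double hz) { return std::log(hz / 20.0) / std::log(1000.0); }

double loadCutoffNorm(uint32_t savedVersion, double savedNorm)
{
    if (savedVersion < kCurrentVersion) {
        // Old preset: old knob position -> plain value -> new knob position,
        // so the preset keeps producing the same sound.
        return normFromHzV2(cutoffHzV1(savedNorm));
    }
    return savedNorm; // current version: apply as-is
}
```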
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

S0lo wrote: Sun Mar 31, 2024 3:15 am I don’t know what others do. But I personally do this by implementing versioning in the saved states/presets. So when opening a preset, if the saved version is old, remap the saved value to the new range, that is, change the parameter position so that it gives the same old saved sound. If the saved version is new, then just apply the value obviously.
This works.. until you consider automation. If someone has a project where a bunch of filter cutoff automation is stored on the track, now it's suddenly all wrong.

I don't think there's really an easy solution for this though. Some compatibility options (e.g. "use old filter range") that are automatically set based on loading older presets would be more robust, but also perhaps kinda annoying clutter.

Post

Don't save the parameters using normalized values (0 - 1), but their actual values.
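For example (a hedged sketch with a plain linear mapping; real parameters often sit on a curve):

```cpp
// Persist the plain (denormalized) value; on load, re-normalize against
// whatever the *current* range is, so extending the range later leaves
// old presets sounding the same.
struct ParamRange { double min, max; };

double toPlain(ParamRange r, double norm)  { return r.min + norm * (r.max - r.min); }
double toNorm (ParamRange r, double plain) { return (plain - r.min) / (r.max - r.min); }

// save: write toPlain(currentRange, knobPosition)
// load: knobPosition = toNorm(currentRange, savedPlainValue), clamped to [0,1]
```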
My audio programming blog: https://audiodev.blog

Post

kerfuffle wrote: Sun Mar 31, 2024 9:07 am Don't save the parameters using normalized values (0 - 1), but their actual values.
Yes - but that doesn't solve the problem with automation data. I'm currently studying the CLAP API a bit more closely, and it has something to say about this problem:
The CLAP Documentation wrote: There are two approaches to automations, either you automate the plain value, or you automate the knob position. The first option will be robust to a range increase, while the second won't be. If the host goes with the second approach (automating the knob position), it means that the plugin is hosted in a relaxed environment regarding sound changes (they are accepted, and not a concern as long as they are reasonable).
https://github.com/free-audio/clap/blob ... t/params.h

...apparently, with CLAP, hosts can opt to store automation data as plain values rather than normalized ones. However - that is up to the host. So - changing (increasing) a plugin's parameter range may be unproblematic in some hosts and problematic in others. ...decreasing the range, on the other hand, will probably always be problematic (I guess).
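For reference, here is roughly what the plugin side looks like with the CLAP params extension from clap/ext/params.h (a minimal sketch; the parameter, its range, and the function name are made up):

```cpp
#include <clap/clap.h>
#include <cstdio>

// Sketch of a clap_plugin_params.get_info implementation. The host sees
// min/max in plain units (Hz here), so it *can* store automation as plain
// values; whether it actually does is up to the host.
static bool myParamsGetInfo(const clap_plugin_t *plugin, uint32_t index,
                            clap_param_info_t *info)
{
    (void)plugin; // unused in this sketch
    if (index != 0)
        return false;
    info->id = 0;
    info->flags = CLAP_PARAM_IS_AUTOMATABLE;
    info->cookie = nullptr;
    std::snprintf(info->name, sizeof(info->name), "Cutoff");
    std::snprintf(info->module, sizeof(info->module), "Filter");
    info->min_value = 20.0;      // plain value, not normalized
    info->max_value = 20000.0;   // a later version could raise this limit
    info->default_value = 1000.0;
    return true;
}
```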
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

mystran wrote: Sun Mar 31, 2024 6:10 am
S0lo wrote: Sun Mar 31, 2024 3:15 am I don’t know what others do. But I personally do this by implementing versioning in the saved states/presets. So when opening a preset, if the saved version is old, remap the saved value to the new range, that is, change the parameter position so that it gives the same old saved sound. If the saved version is new, then just apply the value obviously.
This works.. until you consider automation. If someone has a project where a bunch of filter cutoff automation is stored on the track, now it's suddenly all wrong.

I don't think there's really an easy solution for this though. Some compatibility options (e.g. "use old filter range") that are automatically set based on loading older presets would be more robust, but also perhaps kinda annoying clutter.
That’s very true. Though sometimes there is room for a new range control/switch in the UI. At least automatically reverting to the old range won't be as confusing to the user.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

mystran wrote: Sun Mar 31, 2024 6:10 am
S0lo wrote: Sun Mar 31, 2024 3:15 am I don’t know what others do. But I personally do this by implementing versioning in the saved states/presets. So when opening a preset, if the saved version is old, remap the saved value to the new range, that is, change the parameter position so that it gives the same old saved sound. If the saved version is new, then just apply the value obviously.
This works.. until you consider automation. If someone has a project where a bunch of filter cutoff automation is stored on the track, now it's suddenly all wrong.
Hm...I know it works with Pro Tools, both TDM and AAX. I think it's the same with VST/AU, but I don't think I've had to do this in particular...

I mostly remember why I had to do it for Amp Farm. More amp models in version 2 and later, and addition of many convolution cabinets in version 3 and later, from the original that had a set of IIR-based cabinets. TDM control states were signed 32-bit, so -1 to +1.

The plugin state is initialized with a chunk, the chunk contains the plugin version of the automation data. A little hazy on the details, but I know all versions, including the native AAX, can read automation from any version. That was an important design consideration for the much-later native version that might be dropped into an old TDM.

Don't recall offhand how it would deal with adding to automation in an existing session with a newer plugin version, but I don't recall that being an issue.

Like I said, I'm hazy on the details, but the AF chunk/automation parameters are 32-bit signed in the -1/+1 range, so the values need to be remapped on the fly, and it would have been catastrophic to release a new version that didn't track existing automation correctly.
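In spirit, something like this (purely hypothetical names and mappings, just the shape of the on-the-fly remap, not actual Amp Farm code):

```cpp
#include <cstdint>

// Hypothetical per-version mappings from normalized position to a model
// index; the lists only ever grow by appending, so old indexes stay valid.
static double mapV1(double n)      { return n * 3.0;  } // 4 amp models in v1
static double mapV2(double n)      { return n * 7.0;  } // 8 models in v2
static double mapCurrent(double n) { return n * 15.0; } // 16 models now

// Automation arrives in the fixed signed control range and is interpreted
// according to the version stored in the session chunk.
double remapControl(uint32_t chunkVersion, double value /* in [-1,+1] */)
{
    double norm = 0.5 * (value + 1.0); // [-1,+1] -> [0,1]
    switch (chunkVersion) {
        case 1:  return mapV1(norm);
        case 2:  return mapV2(norm);
        default: return mapCurrent(norm);
    }
}
```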
My audio DSP blog: earlevel.com

Post

Actually, it's more often that I change the internal curve used to map the parameter than the range itself. It's the curve that I more often have second thoughts about, after using it frequently and realizing that it's not that practical. And it's so frustrating when that happens; I feel like, “what was I thinking?”. (Edit: obviously the same problem applies to modulation.)

The range, I remember changing maybe once (twice max, I can't be sure). In one case I added a new range switch control.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

Great insights so far, thanks. So if CLAP is able to represent the plain (unnormalized) value with limits, I assume these limits can be changed by parameter re-initialization. I also assume that if the plain value was used, it could be remapped internally by the plugin (or at the least "recurved") and still keep its design limits. The knob-value method has me wondering now about its range; I would again assume it is normalized, or at most re-scalable via the appropriate slope equation. As rob said, reducing the range of a parameter after automation is established may pose a problem if that automation attempts to go beyond the newly set limits. Also, having automation that doesn't fit the expected VALUES would pose a problem. There is a lot to think about; I wouldn't mind testing some things, but I am unsure what to use.

Post

Well.. you can obviously clip incoming automation values, so it's mostly just a case of any automation using the older, larger range then sounding wrong.. though I'm not entirely sure why one would want to reduce ranges. The bigger problem is that if the user draws a straight line in the host automation editor, you probably want your cutoff to sweep from 20Hz to 20kHz in a linear fashion on a log scale, and not linearly in the actual numerical values... so you'd need to let the host know about your tapers and trust that it does the right thing.
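For example, the log taper for that kind of sweep might look like this (illustrative constants; 20 Hz to 20 kHz is a factor of 1000, roughly ten octaves):

```cpp
#include <cmath>

// A straight automation line in knob position then sweeps the cutoff
// linearly in octaves rather than linearly in Hz.
double cutoffFromNorm(double norm)        // norm in [0,1]
{
    return 20.0 * std::pow(1000.0, norm); // 0 -> 20 Hz, 1 -> 20 kHz
}

double normFromCutoff(double hz)
{
    return std::log(hz / 20.0) / std::log(1000.0);
}
```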

The "knob position" approach generally "just works," no questions asked... but only if you get the parameter ranges and tapers right the first time. This gets a lot easier with new projects when you've went through the pains of having to change them in your old ones, 'cos you develop a habbit of thinking "are my parameter ranges good enough to be final" before you release stuff. ;)

Personally though, I find that the more common situation is wanting to extend lists of selectable things.. like perhaps we want to add additional filter models to a synth, or another waveform to an oscillator, or whatever... and here I think it's quite handy to be able to store the actual menu indexes and be able to grow the range. With a bit of forethought you can prepare for this even with just normalized parameters: convert all selections to [0,255] or whatever range, convert that to the normalized parameter, and then clip to the actual range when converting back; now you don't have a problem until you have more than 255 values, and.. you could reserve more if you really wanted to. One could perhaps live without exposing such parameters at all.. but sometimes it's handy to automate that stuff too.
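A sketch of that reserved-range trick (hypothetical names; kReserved is whatever headroom you decide on up front):

```cpp
#include <algorithm>
#include <cmath>

// Selections live on a fixed [0,255] grid inside the normalized parameter
// and are clipped back to however many entries actually exist right now,
// so the list can grow without breaking old automation.
constexpr int kReserved = 256;

double selectionToNorm(int index)
{
    return static_cast<double>(index) / (kReserved - 1);
}

int normToSelection(double norm, int numEntries)
{
    int index = static_cast<int>(std::lround(norm * (kReserved - 1)));
    return std::clamp(index, 0, numEntries - 1); // clip to the actual list
}
```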

There is actually one more way to preserve automation even if you change your parameter range: make the parameter with a new range a new parameter... and then make the old one a "phantom" that just controls the new parameter through a remapping function. The main problem with this approach though is that if a user loads a project with old automation and then records more automation with the new parameter, now they have two automation tracks for the same (logical) parameter mapping to different parameter ranges, which I guess might cause some support trouble.
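A sketch of that phantom-parameter idea (all names and ranges invented for illustration):

```cpp
#include <cmath>

struct Filter {
    double cutoffHz = 1000.0;

    // New parameter: 20 Hz..20 kHz, log taper.
    void setCutoffNew(double norm) { cutoffHz = 20.0 * std::pow(1000.0, norm); }

    // Phantom: the old parameter covered only 20 Hz..10 kHz. It stays
    // registered so old automation keeps working, but never appears in
    // the UI; it just forwards through a remapping function.
    void setCutoffOld(double norm)
    {
        double hz = 20.0 * std::pow(500.0, norm);             // old mapping
        setCutoffNew(std::log(hz / 20.0) / std::log(1000.0)); // remap & forward
    }
};
```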

Post

I'm just posing some ideas...it's late...

It just occurred to me (it's been a little while since I've been in the thick of this), but I think automation is a separate issue from storing preset chunks.

That is, you can store your parameters normalized, like basic controls deal with (0.0-1.0, for instance), or unnormalized (0-20000 Hz, or whatever), which may follow a curve. Obviously this is not a problem. The chunks carry a version field, so an LFO control on the UI can go from having four choices in version 1 to six in version 2. No sweat.

So the concern here is automation. But that's really an issue with the host to resolve, not the plugin. If the plugin stores normalized values, then it's up to the host to re-evaluate settings (via the plugin interface) as needed.

Using VST3 as an example (I can't share the Avid dev docs), from VST 3 Developer Portal:
• Changes in future plug-in versions

Take a discrete parameter, for example, that controls an option of three choices. If the host stores normalized values as automation data and a new version of a plug-in invented a fourth choice, the automation data will be invalid now. So either the host has to store denormalized values as automation or it must recalculate the automation data accordingly.
To "recalculate", of course, it must query the plugin.

I need to cut this short. But as I said, I updated a widely-used Pro Tools plugin over several versions that had changes in parameter ranges, and had no difficulty in supporting back-versions of automation data.

As far as storing "plain" (VST3 lingo) or normalized for your preset, do whichever you prefer, and manage changes with the chunk version. Automation will take care of itself, because the host has to keep that square.
My audio DSP blog: earlevel.com

Post

You should NEVER exceed the [0;1] range for parameters! If you do not stick to this rule, it may work in certain DAWs but will fail in others.
If you want to safely keep downward compatibility while being able to extend the range, you must use your own format chunks with a version number and adapt the range when importing old patches/songs.
https://www.tone2.com
Our award-winning synthesizers offer true high-end sound quality.

Post

Tone2 Synthesizers wrote: Mon Apr 01, 2024 2:37 pm You NEVER should exceed the [0;1] range for parameters! If you do not stick to this rule it may work in certain DAWs, but will fail in others.
I don't think that the OP is talking about extending the normalized parameter range. That one is fixed to 0..1 - that is understood. We are talking about the mapped range, i.e. the range of the "plain" value. ...we are not talking about letting a knob go up to eleven :lol:
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

I want to argue more about the established 0.f - 1.f range as being sub-optimal in its usage of available floating point bits, but I would need to do some example calculations first to support my statements. From memory, this range (in SP FP) is utilizing 24 bits, and modulating only 30 (or 31) bits in total. This is something to consider when the normalized range is scaled.

Post

camsr wrote: Mon Apr 01, 2024 10:04 pm I want to argue more about the established 0.f - 1.f range as being sub-optimal in its usage of available floating point bits, but I would need to do some example calculations first to support my statements. From memory, this range (in SP FP) is utilizing 24 bits, and modulating only 30 (or 31) bits in total. This is something to consider when the normalized range is scaled.
If you want a linear range, it doesn't really matter. Near 0 you have lots of precision, near 1 you have 23 bits worth of mantissa. You can effectively gain one bit of precision by using [-0.5,+0.5] instead... but seriously, if you need that much precision for your parameters.. then... idk..

There are 100 cents to a semitone, 1,200 cents to an octave. If we wanted 10 octave range with 1 cent precision that's 12,000 datapoints which you can handle with a 14-bit MIDI parameter. If you want super-accurate beat frequencies or something, then you can always add another parameter for that (which would be better from the UI standpoint anyway). For anything not related to pitch, you need way less than that.

Float precision mostly becomes an issue when you compute something like exp(-freq/samplerate) for very low frequencies, because we get progressively closer and closer to one, so we have a binary representation that's like 0.1111... and your "useful information" is basically stored as "number of ones after the decimal point", which is obviously capped by the size of the mantissa.. and it's basically the same thing with something like cos(verysmall), which is another typical precision killer. Even single-precision float is rarely an issue for anything audio unless you have one of these "very close to but not quite 1" (or some other number, but usually it's 1) type of deals. Sometimes these are unfortunately unavoidable, but if your parameter mapping involves numbers being almost but not exactly some value, then you should probably rethink other things first before worrying about float precision.
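A small demonstration of that "almost 1" trap (illustrative numbers; expm1 computes exp(x) - 1 accurately for small x, which preserves the distance from 1):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // One-pole coefficient exp(-w) for a ~0.5 Hz cutoff at 48 kHz.
    float w = 2.0f * 3.14159265f * 0.5f / 48000.0f;
    float coeff    = std::exp(-w);    // ~0.99993455f; the distance from 1
                                      // survives with only ~10 bits here
    float oneMinus = -std::expm1(-w); // 1 - exp(-w), computed accurately
    std::printf("coeff = %.9f, naive 1-coeff = %.9g, accurate = %.9g\n",
                coeff, 1.0f - coeff, oneMinus);
}
```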
