Modulation matrix implementation strategies...

DSP, Plugin and Host development discussion.

Post

I've been trying to design a modulation matrix for my synth and I'm wondering about the best way to implement it, especially given that some modulation sources are synth-level (mod wheel, pitch bend, LFOs, etc.) and some are voice-level (key, velocity, envelopes, etc.).

I'm thinking there would be an array of modulation slots, each slot being defined by the following (rough C++ sketch after the list):

Target(s)
Multiplier
Source1
Source2
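
Roughly like this, just as a sketch (the enum entries are placeholders for whatever sources and targets the synth ends up having, not a fixed design):

```cpp
// Rough sketch -- the enum entries are placeholders, not a fixed list.
enum class ModSource { None, ModWheel, PitchBend, Lfo1, Velocity, Env1 /* ... */ };
enum class ModTarget { None, Pitch, Cutoff, Amp /* ... */ };

struct ModSlot
{
    ModTarget target  = ModTarget::None;
    float     amount  = 0.0f;               // the multiplier
    ModSource source1 = ModSource::None;
    ModSource source2 = ModSource::None;    // optional second source, e.g. source1 * source2
};
```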

It seems like there needs to be a higher-level, synth-wide object handling some of this, and still a voice-level object actually applying it. And that's where it starts to feel somewhat messy.

I'm guessing the synth-level sources would be rendered once per block, before any voices are rendered. But then rendering each voice would involve a lot of ifs and switches, because each slot could mix synth-level and voice-level mod sources.
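
One idea I've had to avoid that: give every source, synth-level or voice-level, an index into one flat value table that each voice holds, so that by the time a slot is evaluated there's no distinction left. Something like this, continuing the ModSlot sketch above (the table size and which indices get overwritten are made up):

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kNumSources = 16;   // one flat index space for *all* sources

struct Voice
{
    std::array<float, kNumSources> sourceValues {};   // this voice's view of every source

    // Called once per block, before this voice renders: copy the synth-level
    // values in, then overwrite the voice-level entries with this voice's own state.
    void updateSources(const std::array<float, kNumSources>& synthValues,
                       float velocity, float env1)
    {
        sourceValues = synthValues;                    // mod wheel, pitch bend, global LFOs...
        sourceValues[static_cast<std::size_t>(ModSource::Velocity)] = velocity;
        sourceValues[static_cast<std::size_t>(ModSource::Env1)]     = env1;
    }

    // Slot evaluation is now just lookups -- no "is this a synth or voice source?" switch.
    float evaluateSlot(const ModSlot& slot) const
    {
        float v = sourceValues[static_cast<std::size_t>(slot.source1)];
        if (slot.source2 != ModSource::None)
            v *= sourceValues[static_cast<std::size_t>(slot.source2)];
        return v * slot.amount;
    }
};
```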

It also seems like there could be duplicated info about the modulation slots in the synth object as well as all the voice objects. Maybe there's another way to think about this.
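
Or maybe the answer to the duplication is simply that only the synth object owns the slot array, and the voices just get a read-only reference to it when they render, so the per-voice state is nothing but source values and modulator outputs. Another rough sketch (updateSynthSources(), currentVelocity(), currentEnv1() and Voice::render() are hypothetical here):

```cpp
#include <array>
#include <vector>

struct Synth
{
    std::vector<ModSlot> slots;                           // the only copy of the matrix
    std::array<float, kNumSources> synthSourceValues {};  // written once per block
    std::vector<Voice> voices;

    void processBlock(/* audio buffers, midi, ... */)
    {
        updateSynthSources();                             // mod wheel, bend, global LFOs
        for (Voice& v : voices)
        {
            v.updateSources(synthSourceValues,
                            v.currentVelocity(), v.currentEnv1());
            v.render(slots /*, buffers */);               // voices read, never copy, the slots
        }
    }

    void updateSynthSources();                            // poll controllers / tick global LFOs
};
```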

Any thoughts on how to organize this in a reasonably neat, object-oriented way?

Post

In MPE, at least, pitch bend and CC74 are also voice-based... any CC could be voice-based... In VST3, note expressions are voice-based... The MPE spec identifies the voice by MIDI channel, so voice allocation is done by the controller... ;-)
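
So per-note bend ends up as just another voice-level source. Very roughly, assuming an MPE lower zone with channel 1 as the master channel (findVoiceForChannel(), setGlobalPitchBend(), setSource() and kPitchBendIndex are placeholders for whatever your synth already has):

```cpp
void Synth::handlePitchBend(int midiChannel, int bend14bit)
{
    const float norm = (bend14bit - 8192) / 8192.0f;       // -1..+1

    if (midiChannel == 1)                                   // MPE master channel: zone-wide bend
        setGlobalPitchBend(norm);                           // -> synth-level source
    else if (Voice* v = findVoiceForChannel(midiChannel))   // member channel identifies the note
        v->setSource(kPitchBendIndex, norm);                // -> voice-level source
}
```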
