Is there a plugin/tool that uses "machine learning" to help program synths?


Post

Here's what I'm looking for: essentially a VST plugin where I can set up MIDI mappings for the synth parameters I want to edit. I give the plugin some basic starting parameters/constraints (high vs mid vs low frequency, long vs short sound, noise vs tone), it applies some smart randomization, and it lets me audition the result. Then I "tweak" the sound at a meta-level by giving the result a rating, like a thumbs up or down. The plugin then makes various modifications to the underlying controls, "paying attention" to the directions of change it's made in the past and whether those directions and magnitudes resulted in a thumbs up or thumbs down...
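In rough pseudocode, the loop I'm imagining might look like this (a Python sketch; the names and the 0..1 parameter normalization are just my assumptions, not any existing plugin):

```python
import random

# Hypothetical sketch: parameters are normalized 0..1 and would be
# sent to the synth as MIDI CC values.

def propose(params, history, step=0.1):
    """Nudge every parameter, biased toward directions that earned a
    thumbs up in past rounds and away from thumbs-down directions."""
    proposal, deltas = {}, {}
    for name, value in params.items():
        bias = sum(d * r for d, r in history.get(name, []))
        delta = random.gauss(bias * step, step)
        deltas[name] = delta
        proposal[name] = min(1.0, max(0.0, value + delta))
    return proposal, deltas

def rate(history, deltas, thumbs_up):
    """Record how the last move was rated, so future proposals lean
    the same way (thumbs up) or the opposite way (thumbs down)."""
    r = 1.0 if thumbs_up else -1.0
    for name, d in deltas.items():
        history.setdefault(name, []).append((d, r))
```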

If you're aware of Sonic Charge's Synplant, I'm thinking of something along those lines, but more powerful, and one that can be "mapped" to any synth's CC parameters.

Obviously this would involve a fair bit of initial mapping, but I think this kind of thing would be VERY useful, and I can't believe I'm not finding anything like it.

I've seen machine learning apps where you feed it a sound sample and it will essentially synthesize that sound (like Google's thing), but I've not seen anything like that in terms of a "guided" sound programming app.

Does it not exist, or am I just looking in the wrong places?

Post

I think it's called "the factory presets" ... :hihi:

Post

ai! :o
i told you!!!!

Post

fwiw orion used to have a preset breeder, you'd choose two presets and it would do various mixes of the parameters.

Post

vurt wrote: fwiw orion used to have a preset breeder, you'd choose two presets and it would do various mixes of the parameters.
only works on internal synths (mostly) due to over-engineered preset browsers

Post

ah, not used it for a while so couldn't remember either way :lol:
thanks for clarifying though :)

Post

It depends on the hypermap you are creating with your ratings. Each rating sample you give carries the coordinates of the parameter settings it is connected to, and the hypermap has to be represented so that it best fits your ratings in the parametric hyperspace connecting those ratings to the resulting sound.

I think it might be easiest to simply interpolate between presets along a slider, repeatedly. This way you can linearly navigate the hyperspace through known points (presets), which is a lot faster for your goals anyway. Each synth is different.
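A minimal sketch of what I mean, assuming each preset is a set of parameters normalized to 0..1 (the preset names and values here are made up):

```python
def lerp_presets(a, b, t):
    """Linearly interpolate between two presets (dicts of normalized
    0..1 parameters). t = 0.0 gives preset a, t = 1.0 gives preset b."""
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

# Hypothetical presets: audition points along the line between them.
pad = {"cutoff": 0.3, "resonance": 0.2, "attack": 0.7}
pluck = {"cutoff": 0.9, "resonance": 0.5, "attack": 0.05}
candidate = lerp_presets(pad, pluck, 0.25)  # mostly pad, a bit of pluck
```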
SLH - Yes, I am a woman, deal with it.

Post

I have to deal with high-dimensional optimisation problems in my day job, so maybe I can give a bit of insight here. First of all I think Vertion is right that preset morphing is a useful technique here. But I'd add that preset morphing doesn't work for all engines and parameters. In order to make interpolation work you need to reduce the control set to a bunch of macro controls that behave somewhat sensibly and predictably. And at that point it's not so hard for the user to just use those directly...

A decent algorithm that sort of matches what the OP describes would be Simplex (the Nelder-Mead downhill simplex method). You'd give the user a set of samples to try, and they pick the "worst". We try to obtain a better sample by "reflecting" that point through the centroid of the other points. If the result is still the worst, we replace it with a point halfway closer to the centroid. The good thing about Simplex is you can use it with just about any "black box" function with minimal output: all you need is a ranking of the samples. There are two big downsides, however: it's a bit slow, and you need to compare N+1 samples right at the start. So for practical purposes we could only use this for a very limited number of synth parameters (say 4?) or it becomes difficult for the user to pick the worst, which is something they will have to do quite a few times.
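A rough sketch of those two moves, assuming parameter vectors normalized to 0..1 (this is the textbook reflection/contraction, not any particular implementation):

```python
import numpy as np

def reflect_worst(points, worst, alpha=1.0):
    """One Simplex-style step: reflect the user's "worst" sample
    through the centroid of the remaining points. `points` is an
    (N+1, N) array of parameter vectors in 0..1."""
    centroid = np.delete(points, worst, axis=0).mean(axis=0)
    return np.clip(centroid + alpha * (centroid - points[worst]), 0.0, 1.0)

def contract_worst(points, worst):
    """If the reflected point was still rated worst, pull the worst
    point halfway toward the centroid instead."""
    centroid = np.delete(points, worst, axis=0).mean(axis=0)
    return 0.5 * (points[worst] + centroid)
```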

In real-world optimisation problems we can get good results with fewer steps by using gradient information. This means that not only do we know which points are closest to the ideal solution, we also know which direction of change looks more promising. For a synth programmer, this is equivalent to giving feedback like "brighter/darker", "purer/richer", "smoother/snappier", etc., where those qualities have already been mapped to parameters. But that's pretty much a description of how we already program a synth that has macro controls!
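For illustration, one step of that kind of directional feedback might look like this (the macro names are hypothetical, and the macro-to-parameter mapping is assumed to already exist):

```python
# Hypothetical macro set; each macro fans out to raw synth parameters.
macros = {"brightness": 0.5, "richness": 0.5, "snappiness": 0.5}

def nudge(macros, name, direction, rate=0.05):
    """direction = +1 for "more", -1 for "less": one gradient-like step."""
    macros[name] = min(1.0, max(0.0, macros[name] + direction * rate))

nudge(macros, "brightness", +1)  # user said "brighter"
```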

Post

A solution could be created without any machine learning, rather just a user-guided walk through the parameter space.

You would have a point in the space, a vector representing the trajectory, and some alternative trajectories with small random changes presented to you at each step. Maybe some control loop that adjusts the random variance when you seem to be heading towards the sound you want, or adds some acceleration when you choose to stay on the current trajectory.
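Something like this, sketched in Python (the numbers and names are placeholders):

```python
import numpy as np

def candidates(pos, vel, n=4, jitter=0.05, accel=1.2):
    """Offer the current trajectory (sped up a little) plus a few
    randomly perturbed alternatives; the user picks one each step."""
    options = [(np.clip(pos + vel * accel, 0.0, 1.0), vel * accel)]
    for _ in range(n):
        v = vel + np.random.normal(0.0, jitter, size=vel.shape)
        options.append((np.clip(pos + v, 0.0, 1.0), v))
    return options  # list of (new position, new velocity) pairs
```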

There is the 'curse of dimensionality' phenomenon that plagues gradient descent systems. YOU are providing the error gradient with your approval feedback (differentiated at each step), so most of the process would be spent listening to nearly the same sound with many, many small changes.

This is what you seem to be describing anyway.

Visually, we can have a number of sliders showing the state (position, velocity, confidence), and maybe some silly higher-dimensional geometry or Chernoff faces.

The question I find myself asking is whether this is a job for a plugin, or a host?

I also think if this is worth doing, then it's worth making a long-term map of approval and allowing real-time morphing along a hypersurface according to the very best parameter trajectories.

Post

To clarify what I wrote, 'repeated interpolation between presets' doesn't mean just moving a slider between 2 points repeatedly. It means starting with 2 points, moving the slider a bit to create an interpolated point (the current point), then changing one point to a new preset point and replacing the second point with the current point. Thus skating around hyperspace. Additionally you can also use random by radius of current instead of selecting a preset, to simply move in a random direction. It's also possible to extrapolate by sliding past the preset point.

Point A (Source) to Point B (Destination) in reiterated interpolations.
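In code, the skating might look something like this (a sketch with made-up presets; t > 1.0 gives the extrapolation I mentioned):

```python
def slide(a, b, t):
    """Slider between points a and b (dicts of 0..1 parameters);
    t > 1.0 extrapolates past b."""
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

# Hypothetical presets as dicts of normalized parameters.
p1 = {"cutoff": 0.2, "env": 0.8}
p2 = {"cutoff": 0.7, "env": 0.3}
p3 = {"cutoff": 0.5, "env": 0.9}

current = slide(p1, p2, 0.4)       # interpolate, keep the result...
current = slide(current, p3, 0.6)  # ...then skate on toward a new preset
current = slide(current, p3, 1.3)  # or extrapolate past it
```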

_______________
Looking at the other comments: hyperplanes can be useful (a 2-point distance function or thresholded weighted aggregation), but it really depends on the texture of the hyperspace map. Depending on the level of detail, one could pull off high detail with an aggregate set of quantized noise (a stochastic optimization history).
SLH - Yes, I am a woman, deal with it.

Post

Vertion wrote: Tue Oct 16, 2018 8:00 pm Thus skating around hyperspace. Additionally you can also use random by radius of current instead of selecting a preset, to simply move in a random direction. It's also possible to extrapolate by sliding past the preset point.
What do you mean by 'radius of current'?

I think random walking is necessary; the OP seems to want to set specific MIDI CC parameters rather than meddle with the plugin parameters themselves (which would offer presets as known safe points), unless I misunderstood.

If you did have a set of points that you could 'skate' between then that does sound like the most enjoyable approach - everything you get would be bounded by the convex hull of the approved points. Maybe you could specify some bad sounds as no-go zones to skate around.

Post

Slider interpolates between point A and point B.

Initialize A and B to random points.
Step 1: User replaces A with Preset 8, and B with Preset 11.
Step 2: User slides slider to some point between A and B.
Step 3: User replaces A with slider point, and B with Preset 27.
Step 4: User slides slider again to some point between A and B.
Step 5: User replaces A with slider point and B with slider point plus random offset (restricted to a radius set by user).
Step 6: User slides slider somewhere between point A and B.
_____________
Repeating steps 2, 3, and 4 means skating around using known points.

Steps 5 and 6 represent a random walk from the current point (slider) in hyperspace.

The max radius set by the user defines a hypersphere around the current point, or it can instead be a box radius if you want to simplify. To do a hypersphere: make a random point anywhere in hyperspace, find the direction between it and the current point, normalize that to the radius, and then diminish it by a random number from 0.0 to 1.0 along that direction. The point is now contained within the radius of the hypersphere, except where clipped by the bounds of the hyperspace, of course.
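As a sketch, assuming the parameter space is normalized to 0..1 (the function name is made up):

```python
import numpy as np

def random_within_radius(current, radius):
    """The recipe above: pick a random point anywhere in the 0..1
    hyperspace, take the direction from `current` toward it, normalize
    to `radius`, shrink by a random factor 0..1, then clip to bounds."""
    other = np.random.uniform(0.0, 1.0, size=current.shape)
    direction = other - current
    norm = np.linalg.norm(direction)
    if norm > 0.0:
        direction /= norm
    point = current + direction * radius * np.random.uniform()
    return np.clip(point, 0.0, 1.0)
```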
SLH - Yes, I am a woman, deal with it.

Post

What is described here isn't so much AI, it's more about evolution. And that makes sense: evolutionary machine learning.
I could imagine giving the algorithm a human-made (intelligent) starting point, and then letting some instances play against each other... It would require non-realtime calculation of the sounds to compare them to the target...
It wouldn't be a plug-in, but it would need to load plug-ins...
I bet it's possible with existing technology...
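A minimal sketch of that idea, with the offline rendering and target comparison left abstract (everything here is hypothetical):

```python
import random

def evolve(seed, fitness, generations=50, children=16, sigma=0.05):
    """Tiny (1+lambda) evolutionary loop: mutate the current best
    patch, render each child offline, keep whichever scores closest
    to the target sound. `fitness` (render + compare) is abstract."""
    best, best_score = seed, fitness(seed)
    for _ in range(generations):
        for _ in range(children):
            child = {k: min(1.0, max(0.0, v + random.gauss(0.0, sigma)))
                     for k, v in seed.items()}
            score = fitness(child)
            if score > best_score:
                best, best_score = child, score
        seed = best  # next generation mutates the current best
    return best
```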

Post

What you're looking for is something we used to have back in the early days of the Casio CZ1 synth and the Yamaha DX7. There was a very clever program called "DX Android" and one called "CZ Android" that did exactly what you're looking to do. More often than not you'd get unusable garbage... but then a gem would show up that, with a bit of tweaking, would turn into something quite useful. All at the click of a button. As I recall, the programs were written for the old Atari ST desktop computers.

The only thing close in today's world I know of is Patch Morpher. This might be what you're looking for.
