Machine Learning for VA filters?

DSP, Plugin and Host development discussion.

Post

How good are the results from machine learning compared to a regular non-linear VA filter implementation? I.e., how much CPU do you need to achieve similar results/accuracy? (ML vs. VA)

Post

Well, it depends. Not to be pedantic, but VA is a pretty huge term that encompasses a lot of different techniques, and the same goes for ML, so it's hard to say which one is better for accuracy.
I'm not really sure how to implement such a filter with just an NN model while keeping it fast, dynamic, and stable even when modulated at audio rate, but my guess is that it would be much slower without much improvement in sound quality compared to a standard topology-preserving SVF.
Still a bit behind on this subject so take my words with a pinch of salt, I'm still learning :)
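
For reference, this is roughly what I mean by a standard topology-preserving SVF: a minimal Python sketch of the linear TPT/ZDF form in the style of Zavalishin's book, with the nonlinearities and coefficient smoothing left out.

```python
import math

def make_tpt_svf(sample_rate):
    """Minimal linear topology-preserving (TPT/ZDF) state variable filter."""
    s1 = s2 = 0.0  # trapezoidal integrator states

    def tick(x, cutoff_hz, resonance):
        nonlocal s1, s2
        g = math.tan(math.pi * cutoff_hz / sample_rate)  # prewarped integrator gain
        k = 2.0 - 2.0 * resonance                        # damping, resonance in [0, 1)
        # Solve the zero-delay feedback loop directly (possible because it's linear)
        hp = (x - (k + g) * s1 - s2) / (1.0 + g * (k + g))
        bp = g * hp + s1
        lp = g * bp + s2
        s1 = g * hp + bp  # integrator state updates
        s2 = g * bp + lp
        return lp, bp, hp

    return tick
```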

Post

I wonder if anyone is doing something like Chris of Airwindows is doing with his reverb matrix experiments, in an attempt to get close to some Bricasti reverbs..

Would it be possible to program an ML environment to perform generative experiments on filter topologies, adjusting the kinds of values and routings that developers normally fine-tune by hand to home in on the sound they seek? That could produce some interesting results.

Post

Kraku wrote: Thu Jan 04, 2024 9:23 pm How good are the results from machine learning compared to a regular non-linear VA filter implementation? I.e., how much CPU do you need to achieve similar results/accuracy? (ML vs. VA)
I'm not an expert on this, but to me it looks like a misunderstanding of what machine learning is.

To my understanding, you use the same type of algorithms, with parameters/tables that are tweaked by machine learning. Machine learning will only trim the parameters and parameter tables for your algorithm, and maybe compare different algorithm implementations for you if you set it up that way.

And preferably some kind of like/dislike option for human input, so you can do a kind of breeding selection to steer it in the right direction.

So it depends on how good the algorithms you feed it are, compared to the code you would write otherwise. But once you get the optimal tables from the machine learning, you could probably try making a faster algorithm that approximates them, giving similar results at higher resolution, then compare a few variants and pick the one that sounds best.

But of course you can also use algorithms whose parameters would be almost impossible for a human to set up by hand, and those could be trimmed for fast execution.
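
To make the breeding-selection idea concrete, here is a toy sketch of an evolutionary search over a parameter vector. The evaluate function is hypothetical: it could measure the error of your filter algorithm against a reference recording, or wrap a human like/dislike rating.

```python
import random

def evolve_parameters(evaluate, init, population=32, generations=200, sigma=0.05):
    """Toy breeding selection: evaluate(params) returns an error score, lower is better."""
    pool = [[p + random.gauss(0.0, sigma) for p in init] for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=evaluate)
        parents = pool[: population // 4]  # keep the best quarter
        children = []
        while len(children) < population - len(parents):
            a, b = random.sample(parents, 2)
            # crossover (average the parents) plus mutation (small random nudge)
            children.append([(x + y) / 2 + random.gauss(0.0, sigma) for x, y in zip(a, b)])
        pool = parents + children
    return min(pool, key=evaluate)
```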

Post

CinningBao wrote: Mon Jan 08, 2024 11:51 am Would it be possible to program an ML environment to perform generative experiments on filter topologies, adjusting the kinds of values and routings that developers normally fine-tune by hand to home in on the sound they seek? That could produce some interesting results.
AFAICT that's how Neural Amp Modeler and similar tools work: a fixed configuration of filters and nonlinearities whose coefficients/constants are "brute-forced" with a neural network to best match the input material.
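
Roughly what I have in mind, purely as an illustration (the structure and every name below are made up): a small fixed DSP chain whose handful of constants is optimized by gradient descent to match a target recording, e.g. with PyTorch.

```python
import torch

class FixedStructure(torch.nn.Module):
    """Fixed chain: one-pole lowpass -> tanh waveshaper -> one-pole lowpass,
    with only three learnable constants."""
    def __init__(self):
        super().__init__()
        self.a1 = torch.nn.Parameter(torch.tensor(0.5))    # pre-filter coefficient
        self.gain = torch.nn.Parameter(torch.tensor(2.0))  # drive into the nonlinearity
        self.a2 = torch.nn.Parameter(torch.tensor(0.5))    # post-filter coefficient

    def one_pole(self, x, a):
        y, out = torch.zeros(()), []
        for n in range(x.shape[0]):          # naive per-sample loop, fine for a sketch
            y = a * x[n] + (1.0 - a) * y
            out.append(y)
        return torch.stack(out)

    def forward(self, x):                    # x: 1-D tensor of samples
        x = self.one_pole(x, torch.sigmoid(self.a1))
        x = torch.tanh(self.gain * x)
        return self.one_pole(x, torch.sigmoid(self.a2))

model = FixedStructure()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

def fit_step(dry, wet):                      # dry/wet: paired 1-D tensors of audio
    opt.zero_grad()
    loss = torch.mean((model(dry) - wet) ** 2)
    loss.backward()
    opt.step()
    return loss.item()
```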

Post

As mentioned, VA is a collection of techniques, as is ML. Jatin had a nice overview of white-box and black-box methods for ADC23, but the video hasn't been uploaded yet. See also his paper: https://ccrma.stanford.edu/~jatin/papers/Klon_Model.pdf
gambero wrote: Mon Jan 08, 2024 12:54 pm AFAICT that's how Neural Amp Modeler and similars work, a fixed configuration of filters and nonlinearities of which coefficients/constants are "bruteforced" with a neural network, to closest match the input material.
No, parameter learning in this manner is DDSP: https://intro2ddsp.github.io/intro.html
NAM is black-box modeling, i.e. using an NN (e.g. an LSTM) to model the entire system without regard to what's inside.
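
A minimal sketch of that black-box idea, assuming paired dry/processed recordings and PyTorch; this is not NAM's actual code, just the general shape of an LSTM regressor from input samples to output samples.

```python
import torch
import torch.nn as nn

class BlackBoxLSTM(nn.Module):
    """Map the dry input signal to the processed output with no knowledge of the circuit."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, samples, 1)
        y, _ = self.lstm(x)
        return self.head(y)        # (batch, samples, 1)

model = BlackBoxLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(dry, wet):          # dry/wet: (batch, samples, 1) paired audio tensors
    opt.zero_grad()
    loss = loss_fn(model(dry), wet)
    loss.backward()
    opt.step()
    return loss.item()
```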

Imo, the most interesting methods are grey-box techniques that combine VA modeling and machine learning. These have started to pop up recently.

Post

Archit3ch wrote: Mon Jan 08, 2024 2:29 pm No, parameter learning in this manner is DDSP: https://intro2ddsp.github.io/intro.html
NAM is black-box modeling, i.e. using an NN (e.g. an LSTM) to model the entire system without regard to what's inside.
Thanks for the correction :tu:
