Machine Learning for VA filters?
-
- KVRAF
- Topic Starter
- 1618 posts since 13 Oct, 2003 from Oulu, Finland
How good are the results from machine learning compared to a regular non-linear VA filter implementation? I.e. how much CPU do you need to reach similar results/accuracy? (ML vs. VA)
Misspellers of the world, unit!
https://soundcloud.com/aflecht
- KVRist
- 32 posts since 14 Jun, 2021 from Italy
Well, it depends. Not to be pedantic, but VA is a pretty broad term that encompasses a lot of different techniques, and the same goes for ML, so it's hard to say which one is better for accuracy.
I'm not really sure how you'd implement such a filter with just a NN model while keeping it fast, dynamic, and stable even when modulated at audio rate, but my guess is that it would be much slower without much improvement in sound quality compared to a standard topology-preserving SVF.
I'm still a bit behind on this subject, so take my words with a pinch of salt; I'm still learning.
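For reference, the TPT SVF being used as the baseline here can be sketched in a few lines (a minimal Zavalishin/Simper-style sketch in Python; a real-time version would of course be written in C++ and only recompute coefficients when parameters change):

```python
import math

class TPTSVF:
    """Minimal topology-preserving-transform state variable filter
    (trapezoidal integration, Zavalishin/Simper style)."""

    def __init__(self, fs, fc, q):
        self.g = math.tan(math.pi * fc / fs)  # prewarped cutoff gain
        self.k = 1.0 / q                      # damping
        self.ic1eq = 0.0                      # integrator states
        self.ic2eq = 0.0

    def process(self, x):
        g, k = self.g, self.k
        # solve the implicit (zero-delay feedback) loop directly
        v1 = (self.ic1eq + g * (x - self.ic2eq)) / (1.0 + g * (g + k))
        v2 = self.ic2eq + g * v1
        self.ic1eq = 2.0 * v1 - self.ic1eq    # update integrator states
        self.ic2eq = 2.0 * v2 - self.ic2eq
        return v2, v1, x - k * v1 - v2        # low, band, high
```

A quick sanity check: feed it DC and the lowpass output settles at the input level while band and high decay to zero.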
-
- KVRAF
- 2490 posts since 15 Apr, 2004 from Capital City, UK
I wonder if anyone is doing something like Chris (Airwindows) is doing with his reverb matrix experiments, in an attempt to get close to some Bricasti reverbs..
Would it be possible to program an ML environment to run generative experiments on filter topologies, adjusting the kinds of values and routings that developers fine-tune to home in on the sound they seek? That could produce some interesting results.
-
- KVRist
- 89 posts since 21 Jul, 2022
I'm not an expert on this, but to me it looks like a misunderstanding of what machine learning is.
As I understand it, you use the same types of algorithms, with parameters/tables for machine learning to tweak. Machine learning will only trim the parameters and parameter tables for your algorithm, and maybe compare different algorithm implementations for you if you set it up that way.
And preferably with some kind of like/dislike option for human input, so you can run a kind of breeding selection to steer it in the right direction.
So it depends on how good the algorithms you feed it are, compared to the code you would write otherwise. But once you get the optimal tables from the machine learning, you could probably try to build a faster algorithm that approximates them with similar results at higher resolution, then compare a few variants and pick the one that sounds best.
But of course you can also use algorithms whose parameters would be almost impossible for a human to set by hand, trimmed for fast execution.
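That trim-the-parameters idea can be shown with a toy example (everything here is made up for illustration: a one-pole filter stands in for the "algorithm", and random mutation with survival of the fittest stands in for the breeding selection):

```python
import random

def one_pole(a, xs):
    """One-pole lowpass: y[n] = a*x[n] + (1 - a)*y[n-1]."""
    y, out = 0.0, []
    for x in xs:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

random.seed(0)
xs = [random.random() for _ in range(200)]  # test signal
target = one_pole(0.3, xs)                  # "recording" of the real device

def loss(a):
    return sum((y - t) ** 2 for y, t in zip(one_pole(a, xs), target))

best = 0.9                                  # deliberately bad initial guess
for generation in range(100):
    # mutate the current best, keep the fittest offspring
    offspring = [min(0.99, max(0.01, best + random.gauss(0.0, 0.05)))
                 for _ in range(8)]
    challenger = min(offspring, key=loss)
    if loss(challenger) < loss(best):
        best = challenger
# best should end up close to the true coefficient 0.3
```

The same loop scales (conceptually) from one coefficient to whole parameter tables; a human like/dislike vote would simply replace or augment the loss function.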
- KVRist
- 32 posts since 14 Jun, 2021 from Italy
CinningBao wrote: ↑Mon Jan 08, 2024 11:51 am Would it be possible to program a ML environment to perform generative experiments on filter topologies, adjusting the kinds of values and routings which are fine-tuned by developers to hone in on the sound they seek.. that could produce some interesting results.
AFAICT that's how Neural Amp Modeler and similar tools work: a fixed configuration of filters and nonlinearities whose coefficients/constants are "brute-forced" with a neural network to best match the input material.
-
- KVRer
- 24 posts since 12 Aug, 2018
As mentioned, VA is a collection of techniques, as is ML. Jatin gave a nice overview of white-box and black-box methods at ADC23, but the video hasn't been uploaded yet. See also his paper: https://ccrma.stanford.edu/~jatin/papers/Klon_Model.pdf
NAM is black-box modeling, i.e. using a NN (e.g. LSTM) to model the entire system without consideration for what's inside.
Imo, the most interesting methods are grey-box techniques that combine VA modeling and machine learning. These have started to pop up recently.
No, parameter learning in this manner is DDSP: https://intro2ddsp.github.io/intro.html
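To make the grey-box/DDSP idea concrete, here's a toy sketch (my own illustration, not from the linked book): a one-pole lowpass y[n] = a*x[n] + (1-a)*y[n-1] is differentiable in its coefficient, so a can be learned by plain gradient descent through the recurrence, instead of modeling the whole system with a black-box NN:

```python
import random

random.seed(1)
xs = [random.random() for _ in range(200)]      # input signal

def one_pole(a, xs):
    y, out = 0.0, []
    for x in xs:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

target = one_pole(0.3, xs)                      # device we want to match

# Learn a by gradient descent, backpropagating through the recurrence:
#   y[n]     = a*x[n] + (1 - a)*y[n-1]
#   dy[n]/da = x[n] - y[n-1] + (1 - a)*dy[n-1]/da
a, lr = 0.9, 0.2
for step in range(2000):
    y_prev, dyda_prev, grad = 0.0, 0.0, 0.0
    for x, t in zip(xs, target):
        y = a * x + (1.0 - a) * y_prev
        dyda = x - y_prev + (1.0 - a) * dyda_prev
        grad += 2.0 * (y - t) * dyda / len(xs)  # accumulate d(MSE)/da
        y_prev, dyda_prev = y, dyda
    a = min(0.99, max(0.01, a - lr * grad))
# a should converge toward the true coefficient 0.3
```

The point is that the learned quantity stays a filter coefficient with a physical meaning, which is exactly what separates this from black-box LSTM modeling.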
- KVRist
- 32 posts since 14 Jun, 2021 from Italy
Archit3ch wrote: ↑Mon Jan 08, 2024 2:29 pm No, parameter learning in this manner is DDSP: https://intro2ddsp.github.io/intro.html
NAM is black-box modeling, i.e. using a NN (e.g. LSTM) to model the entire system without consideration for what's inside.
Thanks for the correction!