Where is a synthesizer that can "listen" to another synth and then recreate that sound?
- KVRAF
- 2131 posts since 22 Sep, 2016
I think it is an interesting idea. Just typing what comes to my mind ... If you want to recreate a sound perfectly, it's quite a task, because your training set has to be huge. What do I mean by that? You have to take into consideration that there are quite a lot of parameters, like note, velocity, modulation wheel, pedal, note length etc., which might change the sound. So for faithfully recreating the source sound you need to go through all notes multiplied by all velocities multiplied by all modulation wheel settings multiplied by ... to name just a few dimensions. And then you have to choose how those parameters are adjusted in your target synth. I would say what you probably want is really deep sampling. Just my 2 cents without thinking too much.
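To make the combinatorial point concrete, here is a toy back-of-the-envelope sketch. The dimension sizes are illustrative assumptions (not measurements of any real synth), but they show how quickly "all notes times all velocities times ..." multiplies out:

```python
# Toy estimate of how many samples "deep sampling" one patch would need.
# Every dimension size below is an assumption, chosen only for illustration.
dimensions = {
    "notes": 88,        # full piano range
    "velocities": 16,   # 16 velocity layers
    "mod_wheel": 8,     # 8 mod-wheel positions
    "pedal": 2,         # pedal up / pedal down
    "note_lengths": 4,  # short / medium / long / held
}

total = 1
for name, size in dimensions.items():
    total *= size

print(f"samples needed: {total:,}")  # 88 * 16 * 8 * 2 * 4 = 90,112
```

Over ninety thousand recordings for one patch, before round-robins or any continuous controller resolution finer than 8 steps, which is why "really deep sampling" gets expensive fast.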
-
- KVRAF
- 3477 posts since 27 Dec, 2002 from North East England
Project Magenta is doing interesting work in this area combining machine learning with additive synthesis. The style transfer audio demos aren't exactly 'natural', but they're enormously impressive nonetheless.
https://magenta.tensorflow.org/ddsp
-
- Banned
- 3889 posts since 3 Feb, 2010
cron wrote: ↑Sat Jan 25, 2020 6:58 pm Project Magenta is doing interesting work in this area combining machine learning with additive synthesis. The style transfer audio demos aren't exactly 'natural', but they're enormously impressive nonetheless.
https://magenta.tensorflow.org/ddsp
now this is awesome
- Banned
- 3564 posts since 22 Aug, 2019
It might be surprising what solutions and tricks a program comes up with in order to recreate a sound on another synth that uses a very different type of synthesis.
Reminds me of the Go computer program that came up with moves humans had never thought of before.
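The "program that comes up with its own tricks" idea is usually implemented as a search over the target synth's parameter space, scoring each candidate by how close its rendered output is to the target sound. Below is a minimal sketch of that loop under heavy assumptions: the "synth" is a hypothetical one-sine stand-in, the similarity measure is plain time-domain mean squared error (real systems compare spectra or perceptual features), and the optimizer is brute random search rather than a genetic algorithm or gradient descent:

```python
import math
import random

SR = 8000   # sample rate (Hz)
N = 512     # frame length in samples

def synth(freq, amp):
    """Stand-in for a synth voice: a single sine partial (hypothetical)."""
    return [amp * math.sin(2 * math.pi * freq * n / SR) for n in range(N)]

def distance(a, b):
    """Mean squared error between two rendered waveforms."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# The "unknown" target sound we are trying to imitate.
target = synth(440.0, 0.8)

# Random search over the parameter space, keeping the closest match so far.
random.seed(1)
best_params, best_err = None, float("inf")
for _ in range(2000):
    freq = random.uniform(100.0, 1000.0)
    amp = random.uniform(0.0, 1.0)
    err = distance(synth(freq, amp), target)
    if err < best_err:
        best_params, best_err = (freq, amp), err

print(best_params, best_err)
```

Swapping the random draw for mutation-and-crossover of the current best candidates turns this into the genetic-programming approach that tends to find the non-obvious, Go-style solutions.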
- KVRAF
- 1748 posts since 2 Jul, 2018
- Rad Grandad
- 38044 posts since 6 Sep, 2003 from Downeast Maine
any further attacks will be deleted
The highest form of knowledge is empathy, for it requires us to suspend our egos and live in another's world. It requires profound, purpose‐larger‐than‐the‐self kind of understanding.
-
Distorted Horizon https://www.kvraudio.com/forum/memberlist.php?mode=viewprofile&u=392076
- Banned
- 3882 posts since 17 Jan, 2017 from Planet of cats
I put Bones in my ignore list. Is it possible to hide that notification of "BONES, who is currently on your ignore list, made this post"?
I'm just happier if I don't even know about his existence.
- Rad Grandad
- 38044 posts since 6 Sep, 2003 from Downeast Maine
I'm sorry, there is no way to stop you from seeing other people's quotes, and if they quote someone on your ignore list there's not much that can be done.
The highest form of knowledge is empathy, for it requires us to suspend our egos and live in another's world. It requires profound, purpose‐larger‐than‐the‐self kind of understanding.
-
thecontrolcentre https://www.kvraudio.com/forum/memberlist.php?mode=viewprofile&u=76240
- KVRAF
- 35244 posts since 27 Jul, 2005 from the wilds of wanny
Back on-topic ... we're looking for a Mike Yarwood / Rory Bremner plugin yeah? Seems like a real lazy arsed way of making sounds to me.
-
- addled muppet weed
- 106097 posts since 26 Jan, 2003 from through the looking glass
thecontrolcentre wrote: ↑Sun Jan 26, 2020 5:00 pm Back on-topic ... we're looking for a Mike Yarwood / Rory Bremner plugin yeah? Seems like a real lazy arsed way of making sounds to me.
years ago, working for a homeless charity, rehousing people. we would take furniture donations. mike yarwood dropped in all his dad's old furniture when he died.
- KVRAF
- 7691 posts since 11 Jun, 2006
Distorted Horizon wrote: ↑Sat Jan 25, 2020 5:18 am Why not use this?
https://www.discodsp.com/bliss/
Or this?
https://www.tx16wx.com/
i like zampler. low cpu, feature set, workflow and the gui..... best of all, still works in WinXP!
HW SYNTHS [KORG T2EX - AKAI AX80 - YAMAHA SY77 - ENSONIQ VFX]
HW MODULES [OBi M1000 - ROLAND MKS-50 - ROLAND JV880 - KURZ 1000PX]
SW [CHARLATAN - OBXD - OXE - ELEKTRO - MICROTERA - M1 - SURGE - RMiV]
DAW [ENERGY XT2/1U RACK WINXP / MAUDIO 1010LT PCI]
-
Friendly Noise https://www.kvraudio.com/forum/memberlist.php?mode=viewprofile&u=466625
- KVRist
- 189 posts since 25 May, 2020
werp wrote: ↑Tue Jan 21, 2020 4:37 am This thing maybe? It’s a recently released Eurorack module by the dude who came up with the Emulator synths.
http://www.rossum-electro.com/products/panharmonium/
deastman wrote: ↑Tue Jan 21, 2020 5:13 am I have one. Yes, it is basically a real-time additive resynthesizer, but the reconstructed signal isn’t very close to the original. For non-realtime, several plugins have already been mentioned. I particularly like Arturia’s Synclavier V.
I can confirm that the signal going through the Panharmonium will be heavily processed and not very close to the original. But if “very close” is not your goal, the Panharmonium is a great source of sounds when processing a live feed. On the clip below, the resulting sound is far from the incoming full-mix audio, but that was done on purpose, trying to get entirely new sounds from a given sound source. There is a link to the original mix on the video page, in case you want to compare the original and processed sound.
https://youtu.be/vvaztGinvgs
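For anyone curious what "real-time additive resynthesis" means under the hood, here is a minimal offline sketch of the idea: analyze one frame of audio with a DFT, keep only the loudest partials, and drive an oscillator bank from their magnitudes and phases. Everything here (frame size, partial count, the naive DFT) is a simplifying assumption; a module like the Panharmonium does this continuously with far more sophisticated tracking:

```python
import cmath
import math

N = 256  # analysis frame length in samples

def dft(signal):
    """Naive DFT: complex bin values for the first half of a real frame."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(signal))
            for k in range(n // 2)]

def resynthesize(bins, n, top=8):
    """Rebuild the frame from the `top` loudest partials (oscillator bank)."""
    loudest = sorted(range(len(bins)), key=lambda k: -abs(bins[k]))[:top]
    out = []
    for i in range(n):
        # Each kept bin becomes one cosine oscillator at its bin frequency,
        # scaled by 2/n to undo the DFT normalization for a real signal.
        s = sum((2 / n) * abs(bins[k]) *
                math.cos(2 * math.pi * k * i / n + cmath.phase(bins[k]))
                for k in loudest)
        out.append(s)
    return out

# A two-partial test tone: resynthesis from its loudest bins is near-exact.
frame = [0.7 * math.sin(2 * math.pi * 4 * i / N) +
         0.3 * math.sin(2 * math.pi * 9 * i / N) for i in range(N)]
rebuilt = resynthesize(dft(frame), N)
err = max(abs(a - b) for a, b in zip(frame, rebuilt))
print(f"max reconstruction error: {err:.6f}")
```

On a clean two-partial tone this reconstruction is essentially exact; on a full mix, throwing away everything but a handful of partials is exactly what produces the heavily processed, "far from the original" character described above.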
-
- Banned
- 7624 posts since 13 Nov, 2015 from Norway
-
- KVRian
- 629 posts since 12 Sep, 2007
YOU are the synthesizer to do that.
Seriously that's a thing humans will always do better than AI.
And the reason is there are too many variables to try and capture.
My suggestion is keep learning synthesis, and let your ears do the walking.