MCharacter: imposing the harmonics from an auxiliary sample

Post

I wonder whether it is possible (namely with MCharacter) to open an auxiliary sample, analyse its waveform, automatically converting it into a spectrum, that is, into a set of harmonic levels, and apply that, for instance, to the "Levels" or "Synthesis" section of MCharacter, in order to superimpose the typical harmonics of the auxiliary sample onto the main sound entering the plug-in (by amplifying existing frequencies and/or synthesising new ones).
Could I do this in MCharacter? And in MTransformer?

Maybe (more complicated)… could a clever use of a modulator do this for me?

Also: I have seen the description of the "Custom sample panel" in the manuals of MCharacter and MTransformer, and I was wondering whether the "Custom sample" functionality could help me achieve my purpose, but I have not been able to find that panel in the plug-ins' GUI. Could you please give me step-by-step help to find it? And would it help me achieve my purpose?
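
To make the idea more concrete, here is a rough Python/numpy sketch of the analysis step I have in mind (the fundamental f0 is assumed to be known or estimated separately; this is not an existing MCharacter feature, just an illustration):

```python
import numpy as np

def harmonic_levels(sample, sr, f0, n_harmonics=16):
    """Estimate the level of each harmonic of f0 in a mono sample, relative to the fundamental."""
    windowed = sample * np.hanning(len(sample))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(sample), d=1.0 / sr)

    levels = []
    for k in range(1, n_harmonics + 1):
        # take the strongest bin within half a semitone of the k-th harmonic
        lo, hi = k * f0 * 2 ** (-0.5 / 12), k * f0 * 2 ** (0.5 / 12)
        band = spectrum[(freqs >= lo) & (freqs <= hi)]
        levels.append(band.max() if band.size else 0.0)

    levels = np.array(levels)
    # harmonic levels in dB, relative to the fundamental
    return 20 * np.log10(np.maximum(levels, 1e-12) / max(levels[0], 1e-12))
```

The resulting vector of dB values is exactly the kind of thing I would like to be able to drop into the "Levels" section.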

Post

What is this custom sample you refer to? Where did you read it exactly? I've not heard of this.

Post

I read it on page 42 (of 98) of the MCharacter manual, in the paragraph "Custom sample panel". The identical paragraph appears on page 45 (of 101) of the MTransformer manual.

Post

Oh, I see, now I know what you are referring to. Well, this custom sample is about the OSC shape in the modulators. Sadly, it has nothing to do with what you are trying to achieve.

Post

Thank you. Anyway, I think it would be a nice feature to implement.

Post

Yes, it certainly has my interest.

Post

MSpectralDynamics might get you there, as the sidechain's frequencies can be applied to the channel.

Post

Thank you. I don't own MSpectralDynamics. I am sure that what you suggest would be interesting, though I suppose it would sound more like a dynamic match EQ (MAutoDynamicEQ, which I do have, can also do something of that kind very well). Working with harmonics would be different: EQ frequencies are fixed, while harmonics are relative to the fundamental, so the two sounds involved in the process would not need to be matched in pitch in advance, and the result should sound more like an instrumental timbre change.

Of course we have MVocoder and MMorph for timbral hybrids, and I own them, but I think it would be nice to have this other possibility as well.
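
To illustrate the fixed-frequency versus harmonic distinction with some invented numbers: a match-EQ correction lives at absolute frequencies, while a harmonic profile lives at multiples of whatever fundamental is currently playing, so it follows the melody by construction. A tiny Python sketch:

```python
# A match-EQ curve is anchored to absolute frequencies (Hz -> dB);
# it stays at 440 Hz and 880 Hz no matter which note is played.
match_eq = {440.0: +3.0, 880.0: -2.0}

# A harmonic profile is anchored to the fundamental (harmonic number -> dB).
harmonic_profile = {2: +3.0, 3: -2.0}

def harmonic_targets(f0, profile):
    """Frequencies the corrections land on for the note currently being played."""
    return {k * f0: gain for k, gain in profile.items()}

print(harmonic_targets(220.0, harmonic_profile))   # {440.0: 3.0, 660.0: -2.0}
print(harmonic_targets(330.0, harmonic_profile))   # {660.0: 3.0, 990.0: -2.0}
```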

Post

A post from another person (viewtopic.php?f=138&t=515921) made me try again, this time using the sidechain feature, but without success.

So, I still have a request: a "capture" feature, capable of detecting the average spectrum of another sample in order to apply it to the new signal (as harmonics, thus following pitch changes), would really be very helpful!

Post

I have asked for a similar function before; what I believe you want is a spectral version of MPhatik. It would need to analyse the sidechain and spectrally increase or decrease the volume of each spectral band in the main input to MATCH it. This is similar to MMorph, except that MMorph doesn't match the two; it's just a spectral level follower.

Ideally you'd want a spectral MPhatik with the ability to analyse a file rather than a live input in the sidechain.

When I've asked for this in the past, Vojtech said it was interesting but time is obviously very limited. Enough people have asked for something similar though!

Post

Vectorwarrior, we have both been wanting this spectral MPhatik for a while. Hopefully it will one day come to light. I think it should be added as a switch to MMorph that changes the mode from level follower to level match. Also, a pitch-following morph would be nice.

Anyway, what I wanted to ask you was to explain to me, in a basic way, the difference between how MPhatik matches two levels and how a level follower does it. I am struggling to understand how they are different.

Post

Pitch following is important for what I would like to have. A simple example: suppose we have a melody played by a flute on one track and a completely different melody played by a clarinet on another track. I would like to capture the spectrum of one clarinet note and then, in a way similar to what MCharacter does, apply its features to each note of the flute melody, in order to make it more "clarinet-like" (which requires pitch following).
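
Just to make the request concrete, here is a back-of-the-envelope Python sketch (with invented harmonic levels; nothing like this exists in MCharacter today) of what "apply the clarinet's harmonics to each flute note" would mean:

```python
# Harmonic levels in dB relative to the fundamental (invented example values).
clarinet = [0.0, -18.0, -4.0, -22.0, -8.0]   # odd harmonics dominate
flute    = [0.0,  -6.0, -14.0, -12.0, -20.0]

def imposed_gains(f0, source_levels, target_levels):
    """Per-harmonic gain (dB) that pulls the source's harmonics toward the target's,
    placed at multiples of the note the source is currently playing."""
    return {
        (k + 1) * f0: target - source
        for k, (source, target) in enumerate(zip(source_levels, target_levels))
    }

# For a flute note at 440 Hz the corrections land at 440, 880, 1320, ... Hz;
# when the flute moves to another note, the same corrections move with it.
print(imposed_gains(440.0, flute, clarinet))
```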

Post

Yeah, I agree with your suggestion; it is probably better to add it to MMorph.

The difference between matching and following is subtle on paper but has large implications for the output.

We want to MMorph signal A to sound like signal B. Signal B has lots of information at 100 Hz, say -6 dB. So when we morph A into B, MMorph looks at the 100 Hz band in B and passes through -6 dB of A. That is a lot of signal, and if A were pink noise (flat across all frequencies), it would put out a lot of signal, because A would contain significant audio at 100 Hz. The problem is that our A signal has very little information at 100 Hz; it has a lot at 80 Hz and 120 Hz, but very, very little at 100 Hz. So MMorph tries to pass through a lot of signal at 100 Hz, but there isn't much there, and the output doesn't actually sound like B at 100 Hz at all.
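
In code, with simplified per-band gain rules (not MMorph's actual algorithm) and an invented -40 dB figure standing in for how little A has at 100 Hz:

```python
import numpy as np

def db(x):
    return 20 * np.log10(max(x, 1e-12))

# Per-band amplitudes at 100 Hz.
a = 10 ** (-40 / 20)   # A has almost nothing at 100 Hz
b = 10 ** (-6 / 20)    # B is strong at 100 Hz

# Following (a crude model of the behaviour described above): B's level acts as
# a gain on A, so the band can never come out louder than what A already has there.
out_follow = a * b

# Matching (the requested spectral MPhatik): the gain is whatever it takes to
# force A's band up to B's level, even if that means boosting noise.
gain_match = b / a
out_match = a * gain_match

print(db(out_follow))   # about -46 dB: nothing like B at 100 Hz
print(db(out_match))    # -6 dB: matches B, at the cost of roughly +34 dB of gain
```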

Ultimately, the problem is that A and B have to be very similar signals. With matching (rather than just following B and hoping there is similar information in A), the output of the morph would be a true morph: A would be FORCED to match B rather than merely 'imprinted' with B.

Yes, it could sound bad if A and B are very different, since it could boost noise rather than actual signal, but you could get VERY VERY interesting results at 50% (half-matched) morphs. I feel it would open up a whole world for the sound design I'm doing (creature sounds for video games), but also lots of interesting musical applications.

I hope all that makes sense.

Post

I still don't get it! Haha sorry.

Forget about frequency information. Let's just talk broadband. What is the difference between phatik and a level follower?

Phatik measures the input and the output and "matches" them.
But as far as I understand that is also what a level follower would do. So I am missing something.

Post

I understand what both of you are saying, but in my case I was really reasoning in terms of the harmonics of a fundamental note, and hoping to obtain something in that direction, rather than in terms of frequencies or broadband levels.
