Dawesome MYTH


Post

Alexander137 wrote: Wed May 08, 2024 2:16 pm Does it have a randomness modulator?
Yes, there's a random sub-module available in the Math module and the LFOs.

Post

In a way. Add a modulator and click on it if it isn't already displayed in the bottom section. There, click the '+' and you can select one of perlin, jit or note-random.

Post

jobinho wrote: Sat May 04, 2024 2:57 pm
Borbolactic wrote: Sat May 04, 2024 2:07 am
databroth wrote: Sat May 04, 2024 2:17 am MSEGs aren't granular though, those are two different things, like ADSR and FM
I realize that, but they are apparently used for granular.
It's like saying filters are used for granular, so it's a completely irrelevant statement.
What I seem to recall meaning is that MSEGs may be used in different ways and contexts, yes? So if you have 500 of the same 'filter/effect' used on or for a sound, that seems to imply something about how that sound is coming about.

In any case, whatever is happening with Myth under the hood, if I understand correctly, one can't simply go under its hood and peruse its code anyway, given its proprietary/closed-source nature. So we seem to have to take the devs' word for it. I'll take it with a grain of salt.

BTW, apparently, granular synthesis isn't synthesis.

Post

Borbolactic wrote: Fri May 03, 2024 3:31 am
databroth wrote: Tue Apr 30, 2024 3:20 am
"I know Vital also has an external free resynthesis-- or maybe more accurately called inference?-- plugin too." ~ Borbolactic
"I've seen this, it is more akin to what synplant is doing, in that it uses the engine its self, rather than an array of oscillators. However, from everything I've seen, it is incredibly mediocre, I am sorry to the creator of this tool, but it seems to do poorly at replicating even the most basic sounds. Stuff that you shouldn't even need resynthesis for to begin with." ~ Databroth
The app is here, and unless their dates refer to last year, what you may have tried was its beta. Its full release is apparently slated for this June.

In any case, regarding the notion of an AI app leveraging only a synth's 'oscillators' versus using the synth 'holistically', it would seem that leveraging a synth systemically or holistically makes at least as much sense as using just one particular facet of it, even if a main facet.

Also, if an AI app leverages a system holistically/systemically, it might be advantageous to the user in learning how an AI gets a synth to produce a particular sound, since we're not always or normally using just the oscillators for sound design.

This segues nicely into the fact that Vital is open source, at least for particular versions and/or its fork, Vitalium. Presumably, when/if the code is open, people and AIs can get under its hood more readily, quickly and/or cost-effectively.

Lastly, with AI improving and GPT-4o having just arrived, some of the work as we know it in designing sounds may change, if it isn't already beginning to, making some or much of this kind of discussion moot. We might even create our own 'synths' and/or work with AI as a synth of a sort...

So maybe for some, less or no more tweaking of buttons and dials along the lines of, for example, what you yourself have been doing on your YouTube channel.

That writ, here's hoping free/libre open-source AI closes the gap against blackbox proprietary software and becomes the standard, in the interest of truth, access, knowledge, equity, freedom and evolution.
"The most dangerous man, to any government, is the man who is able to think things out for himself... Almost inevitably he comes to the conclusion that the government he lives under is dishonest, insane and intolerable..." ~ H.L. Mencken

Post

Myth & Audio Files

----

"I saw one question on 'spectral processing' and what it brings to the table.
There are classical signal processing techniques that simply work on the sampled, digital audio. This is called 'time domain' audio processing. Typical stuff includes filtering, analog modelling, compression, delays ... all that stuff.
Different from this is 'spectral': here the audio signal is divided into (many) frequency bands, typically 2048 or more. So like a very detailed EQ ... and then the processing takes place on the individual audio bands. This allows for different ways to process the signal...
'Spectral Modelling' is when you model sound sources with this kind of technology. There are tons of different models, all with their own pros and cons.
Abyss uses models that excel in capturing and modifying natural fluctuations in sound. It achieves sound that you normally would expect from sample libraries, but it gives you much more flexibility in tone shaping if you compare it with a normal sample-based engine...

So the position slider is like a playhead that can be modulated back and forth, playing the tone colors that result from the spectral models right? Is that some sort of amplitude crossfading, like a 'DJ mix crossfade' or volume mixing? At that stage there’s no spectral crossfading going on I gather.

Question about the spectral models. If one adjusts the detail level to the bare minimum, would that leave a 'basic' waveform - like a saw, sine, triangle etc?

Another question - when one picks a timbre (based on a spectral model) it will then 'render' a set of waveforms, much like multi-sampled material? So spread out over the keyrange, the spectral model will result in different timbres - like a multi-sample? Or is it more like a waveform rendered by a traditional oscillator, so 'a single shot' waveform that gets spread out over the keyrange?

Edit: reading your post above, the models are ACTUALLY rendering WAVs (or the equivalent) that are then stored in RAM and processed further in the signal flow? So I should take 'multisamples' quite literally?

'Yes: detail to the minimum will leave something like a 'natural' sine wave. Let me explain the 'natural' here: normally sine waves are generated by a simple math equation, and this creates a very static sound. When you play, e.g., a bass flute in the low octave, the sound is also very close to a sine wave, but of course there are some fluctuations in the sound. This is what also happens in Abyss - these kinds of fluctuations remain. (For mathematicians: it's pseudo-periodic instead of periodic.)
You can emphasize these fluctuations with higher values of ORGANIC, or by increasing DIRT (which can be modulated and automated, by the way). Second question: yes, the models render an equivalent of WAVs that are stored in RAM and processed further in the signal flow. Think of it as a sample library where you can generate your own samples.'

Because of that, is there any hope that one day this could be handled by GPUs instead of CPUs, as I have wondered many times while playing with other CPU-intensive techniques such as granular synthesis?" ~ Tatiana Gordeeva
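
To make the 'time domain' vs. 'spectral' distinction in the quote above concrete, here is a minimal, generic sketch of frequency-domain processing: audio is cut into overlapping windowed frames, an FFT turns each frame into a few thousand frequency bins, each bin is processed on its own (the 'very detailed EQ'), and overlap-add resynthesizes the result. This illustrates the general technique only, not Myth's or Abyss's actual implementation; the function name and parameters are made up for the example.

```python
# A minimal, generic sketch of frequency-domain ("spectral") processing,
# assuming numpy only. Not Myth's or Abyss's actual code.
import numpy as np

def spectral_process(signal, fft_size=4096, bin_gain=None):
    """Split audio into frequency bins, process each bin, resynthesize."""
    hop = fft_size // 2
    # Periodic Hann window sums to 1.0 at 50% overlap, so plain
    # overlap-add reconstructs the signal when bin_gain is None.
    window = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(fft_size) / fft_size)
    out = np.zeros(len(signal) + fft_size)
    for start in range(0, len(signal) - fft_size, hop):
        frame = signal[start:start + fft_size] * window
        spectrum = np.fft.rfft(frame)   # one complex value per frequency bin (2049 bins here)
        if bin_gain is not None:
            spectrum *= bin_gain        # per-bin processing: the "very detailed EQ"
        out[start:start + fft_size] += np.fft.irfft(spectrum)
    return out[:len(signal)]

# Usage: keep only the bins below ~1 kHz of a test signal (tone + noise).
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(sr)
bin_freqs = np.fft.rfftfreq(4096, d=1 / sr)
y = spectral_process(x, bin_gain=(bin_freqs < 1000).astype(float))
```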

----

"I should have realized this a while ago, but up until the release I was expecting Myth to be more like this idea that Dawesome described here:

Peter V said:
'Novum is the starting point to offer an alternative - with the idea to turn _every_ sample into an instrument. So rather than going down the road of multi-samples, I'd love to create a solution where you take, for example, one voice sample, and have it playing naturally across a wide keyboard range, without the mickey mouse effect (unless you desire it).

The current version of Novum de-couples speed from pitch and it decouples temporal evolution from timbre. I just started doing research to de-couple harmonics from formants. This requires automatic detection of formants from a single source sample, which is an unsolved problem, but I think it can be addressed with machine learning / AI. And it requires a realtime sampling technology that can pitch the sample while keeping its formants in place. Don't expect this to arrive next week or next month. But expect it to arrive in the future.' ~ Peter V

The Iris resynthesis in Myth creates something more like noise that has time-varying harmonic and phase information from the sample. (I'm not sure about some of the other resynthesis options---for example the Modal filter will analyze a sample in quasi-physical modeling terms.) So that was a bit disappointing for me."

----

"...the engine is not based on classic oscillators with a bunch of static waves. Instead, the synth uses two resynthesized audio files visualized as an iris...

Once you’ve done this, the engine transforms the files into a plethora of sonic variations. From here, you can then explore the oscillators with the transformer dials."

~ https://synthanatomy.com/2024/04/daweso ... urney.html
Borbolactic wrote: Thu May 02, 2024 11:31 pm

"What are 'bin arrays' and how are MSEGS being used in the context of sounds for Myth? Are MSEGS not used (more) in sequencing contexts? In any case, in both those contexts, it would seem the sounds are being granularized. Or, perhaps the sound is being played almost per-partial, but not quite-- more like a wavetable (AKA bin array?)..."

"bins are evenly spaced frequency bands..." ~ Databroth

Post

So maybe Myth's secretly a glorified sampler?

Vital is secretly an additive synthesizer

Post

"Important disclosure: I do now work for Dawesome " ~ DataBroth

Post

Jac459 wrote: Sat May 04, 2024 6:42 am "Then the IRIS allows you to read a sample at speed... 0, because it is breaking it down into granular items.
So now, it isn't a new process per se; Grain from Reason Studios does it as well, and Novum from the same maker also.
BUT the way it is done here is with a new workflow, making it interesting."
Thanks Jac459.
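
For readers unfamiliar with 'reading a sample at speed 0': in granular terms the playhead stops moving, but the engine keeps taking short, windowed grains from around that position and overlap-adds them, so the sound sustains indefinitely. Below is a minimal, generic sketch of that idea; it is not Dawesome's actual engine, and the function name and parameters are invented for illustration.

```python
# A minimal, generic sketch of granular "freeze" (playhead speed 0), assuming
# numpy only. Not Dawesome's engine; names and parameters are invented.
import numpy as np

def granular_freeze(source, position, out_len, grain_len=2048, hop=512, jitter=256):
    """Sustain the sound around sample index `position` for `out_len` samples."""
    window = np.hanning(grain_len)
    out = np.zeros(out_len + grain_len)
    rng = np.random.default_rng(0)
    for start in range(0, out_len, hop):
        # A small random offset keeps the grains from repeating identically,
        # which would otherwise sound static and buzzy.
        src = int(np.clip(position + rng.integers(-jitter, jitter + 1),
                          0, len(source) - grain_len))
        out[start:start + grain_len] += window * source[src:src + grain_len]
    return out[:out_len]

# Usage: "freeze" the midpoint of a decaying one-second test tone for two seconds.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t) * np.linspace(1.0, 0.0, sr)
frozen = granular_freeze(tone, position=sr // 2, out_len=2 * sr)
```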

Post

Borbolactic wrote: Thu May 16, 2024 1:38 pm So maybe Myth's secretly a glorified sampler?

Vital is secretly an additive synthesizer
I'm sorry, but it's not, it's not a sampler at all
I explained it to you already, and I'm not sure how the video you linked implies that Myth is a sampler in any way whatsoever

I work for Dawesome, as you quoted, and I've talked with Peter about the sound engine
I don't think he'd lie to me
what do we even have to gain from lying about this anyways??

I read more of your posts, I don't know why you are insistent on Myth being a sampler now?
it simply isn't, it's not a sampler, you seem to be equating any sound generation to "sampler"
I feel like you have a preconceived notion about what you want Myth to be, and you're going out of your way to justify this. Do you think I am lying to you? Do you believe me or Dawesome to be mistaken about the tool he created?

there is a 90 day demo, you can try it for yourself at any point
I do suggest demoing it and seeing for yourself

whatever I say, though, doesn't seem to matter; you get to choose to believe whatever it is you choose to believe. You can live in a world where people like me lie about sound engines if you want, or you can live in the world where you have a better understanding of a synthesizer because I took the time to explain how it works to you. That's the beauty of the world, you get to decide, I can't make that choice for you

I think we should all live in the world where we make the best music we can with whatever tools we choose to use, be it Myth or anything else. If you feel like you can make a plugin with ChatGPT, go right ahead and do so, nothing is stopping you.

for the record, vital isn't actually an additive synthesizer, that title is sort of a joke, I didn't mean it literally, it is a narrative device which I chose to frame a tutorial around. The idea is that you can achieve sounds that are similar to additive synthesis with a non-additive synthesizer.
Check out my website for synth/software articles, reviews and presets http://databroth.com (new review every Monday)

I also do experimental sound design and demos of plugins (no talking) on my youtube: https://www.youtube.com/databroth

Post

What in the white hot glowing f**k is happening up in this thread?
Don't F**K with Mr. Zero.

Post

Ah_Dziz wrote: Thu May 16, 2024 10:06 pm What in the white hot glowing f**k is happening up in this thread?
I'm not sure, but I am attempting to react to it as professionally as possible
I don't mean to argue with strangers

I am only trying to provide information for those curious about Myth
and I don't want people to be misled into believing Myth is a sampler
it is actually very important that we understand Myth is NOT a sampler, nor is it trying to be a sampler

if this is communicated well enough for everyone else, I am happy
if one person needs to believe Myth is a sampler rather than demoing it or trying it for themselves,
I guess I'll just have to accept that

I do ask, please, that we all remain positive, and I hope we can continue the interesting conversation about Myth, synthesis and sound design in general

Myth has dozens of modules to transform a sound after the Iris, loads of modulation too
it's a deep synth worthy of exploration, I've been hard at work making a pack of keys/plucks for Myth to demonstrate that it is capable of much more than just pads and ambience

leads/bass will be next

Post

You're good Mr. Broth. I just read the past few pages.... And was confused.

Post

databroth wrote: Thu May 16, 2024 10:12 pm
And doubling down on what you are saying, the advancement of spectral processing and granular synthesis is what interests me the most in modern synths, even before gen AI.

Post

I'm still having a great time with Myth.

It really straddles that organic/synthetic world better than any synth I've used.

It's also extremely easy and quick to use once you get familiar with it.

I find the sweet spots are getting a lot bigger with familiarity. There's a lot to take in.

It has an enormous range.

I just find it really inspiring

It's my favourite synth by Dawesome by far.

Post

Ah_Dziz wrote: Thu May 16, 2024 10:06 pm What in the white hot glowing f**k is happening up in this thread?
This thread got 4chan'ed by KVR. :borg:

Much props to Databroth for still trying to keep it relevant.
The art of knowing is knowing what to ignore.
