Simple questions on synth tech basics...

DSP, Plugin and Host development discussion.

Post

I am using 96 kHz all the time. I don't hear any such stepping with Sylenth when bending the pitch (I suppose that is what you mean), even at the highest frequencies I can still hear. Maybe my hearing is already too bad :P

I have observed the attack phenomenon on all synths I have used so far, and it applies to all octaves, but the difference is stronger in the low octaves.
Maybe it is simply the higher frequencies (which, by the sound of it, are boosted when increasing the envelope amount) that make a sound muddier. I don't think a kick drum would sound punchy in the 7th octave :hihi:

Post

Yeah, that can have a major effect too. For example, you might be getting the "click" of the attack on top of the low-pass-filtered content coming out of the filter.

For example, if you use cutoff = 500 Hz, the 12 dB or 24 dB per octave slope (or whatever the filter provides) is applied to the input signal. The envelope attack is either linear or log, and a log attack has pretty much a 6 dB per octave spectral slope. (A linear ramp falls off more like 1/n², i.e. 12 dB per octave, but we'll just go with 6 dB here.)

So you get the contrast between the two slopes with the cutoff low.

For example, with a square wave you can use a minimal attack and you won't really hear any clicking when you play a note. Of course the click is there, but it isn't noticeable, because the square's own edges already contain the same broadband content. There is no such contrast between low-frequency and high-frequency transients.

If you do the same with a sine wave however the content of the sine will be limited to only the frequency of the note you play, while the attack transient will introduce the full spectrum "click" filtered at the attack frequency.

Likewise, if you play a square through a low-pass filter it becomes more and more like a sine as you lower the cutoff. The contrast increases. If you raise the cutoff, eventually the contrast will be zero just like with the unfiltered square.

Of course I'm talking about a "click", but this doesn't apply just to very narrow/fast transients; it applies to any transient. The same effect occurs with a fast decay (100 ms or so) as you move the cutoff. The highest cutoff will have minimal effect, but as the content of the sound becomes filtered relative to the slope of the decay, the transient becomes more audible due to the contrast between the frequency content of the elements making up the signal.
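Not from the thread, just a numerical sketch of the contrast described above: the same low sine note analysed with an instant attack versus a 10 ms attack ramp. All parameter values are arbitrary illustrations.

```python
import numpy as np

SR = 48000      # sample rate, Hz
F0 = 100.0      # a low note, where the attack click stands out most

def note(attack_ms, dur=0.25):
    """A sine burst with a linear attack ramp and a smooth 50 ms release,
    embedded in silence so the onset is a genuine transient."""
    n = int(SR * dur)
    t = np.arange(n) / SR
    sig = np.sin(2 * np.pi * F0 * t)
    env = np.ones(n)
    a = max(1, int(SR * attack_ms / 1000))
    env[:a] = np.linspace(0.0, 1.0, a)
    r = int(SR * 0.05)  # identical raised-cosine release on both versions
    env[-r:] *= 0.5 * (1 + np.cos(np.linspace(0, np.pi, r)))
    out = np.zeros(2 * n)
    out[n // 2 : n // 2 + n] = sig * env
    return out

def hf_energy(sig, cutoff_hz=2000.0):
    """Spectral energy above cutoff_hz -- a crude proxy for 'click' content."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / SR)
    return float(np.sum(spec[freqs >= cutoff_hz] ** 2))

click = hf_energy(note(0.0))    # instant attack: broadband transient
smooth = hf_energy(note(10.0))  # 10 ms ramp: the click largely disappears
```

The instant attack leaves a derivative discontinuity at the onset, which shows up as broadband energy well above the note's own frequency; the short ramp removes it.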
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

Was comparing two synths and noticed how much wider one of them sounds. Is there a limit to the stereo width illusion one can create? I mean, many synths have width controls these days, but setting them to max results in a different width on each synth. So it can't be just the angle, which I imagine is 180° in the extreme case, hard left and right of the head. Do synths use a kind of reverb to create that illusion?

I also think that one synth sounds "higher" (as in room height) than the other. That seems even more fascinating, vertical stereo so to speak :D No idea how that works...

Post

fluffy_little_something wrote:I also think that one synth sounds "higher" (as in room height) than the other. That seems even more fascinating, vertical stereo so to speak :D No idea how that works...

Post

Dunno much acoustics or whatever. It would maybe be a long google to find substantiation of this memory--

When reading about speaker building or speaker/room treatment or whatever-- Think I recall reading that driver physical placement in a speaker cabinet, or speaker electronic/crossover time alignment oddities-- And some room treatment or speaker placement conditions-- Can cause "vertical displacement" auditory illusions.

An illusion causing some musical parts to seem "vertically stacked", some sounds seeming to emanate from above or below the speaker.

After getting my office treatment about as good as I could get it, a couple years ago, did a lot of listening to calibrate the ears to the room. A few old records seemed to have a "vertical spread" illusion but most recordings seemed to emanate from the speaker.

Two I recall which had the "vertical spread" illusion on my speakers in my room-- A "best of the beach boys" CD, remastered who knows how many times or whatever. Maybe the remastering encouraged the illusion. The stacked vocal harmonies and instruments, listening with eyes closed, some voices seemed to come from higher-elevation point sources, and other voices/instruments coming from lower-elevation point sources.

Another was a couple of Chick Corea "mid-period" 1980's Return to Forever albums. Which were excellently played and recorded. The guitar, drums, various keys all seemed vertical-stacked as well as having a stereo spread.

At the time was wondering if it was something encoded in the recordings, or something I did wrong acoustically treating the room and placing the speakers. Because I had read that acoustic or speaker placement problems can cause such illusions.

Because only a few recordings have this illusion in my room, maybe it is more a fault or feature in the recordings, rather than screwed-up room treatment. Or maybe some recordings make it more likely to hear a "vertical spread" illusion.

Some 3D processors were claimed capable of processing audio to give vertical direction cues on stereo speakers, dunno anything about it.

I don't care about watching TV or movies, but my wife watches TV a lot while I'm browsing the computer. It is just a cheap medium-sized LCD TV with built-in small stereo speakers. Usually the sound seems to emanate from the direction of the TV, but occasionally a TV show sound effect will seem to come from way over on the other side of the room. I suppose it is caused by surround processing that decodes with a phase/delay trick which fools the ear into hearing the apparent direction drastically far from the actual speakers.

A decade or so ago, Roland was doing a big push on some stereo-speaker surround processor advertised to be able to place sounds far to the left/right of the speakers, even behind the listener or at vertical angles. I never checked it out. Some people's ears were very sensitive to the processing illusion, hearing strong directional cues. Some people didn't hear much effect at all. And such processing was apparently sensitive to listener and speaker location. I'm not so concerned with stereo imaging, more interested in the timbre and musical structures. Never paid much attention to it.

So maybe there are psycho acoustic processors that can add quite a bit of "dimension" to music, at least for listeners with the "right kind of ears" in the "right kind of environment"?

Just sayin, maybe some of your synths mangle the sound in ways that exaggerate directional cues? It might not even be intentional stereo processing, just happy accidents in the way the synth and FX are designed?


Post

Interesting, thanks for the time you took to write that long reply.
Am listening to that 12-minute Chick Corea title track at the moment. Strikingly he has panned almost all instruments to either side, the drums are on the right side for instance, rather than in the middle. The electric piano is on the left, only the spacey singers and the bass seem in the middle. The bass is very dry and seems very close and literally deep down.
Actually, about 8 minutes into the tune it gets a bit stressful to listen to on headphones, due to the panning.

Post

Hard pans aren't great in headphones, although you could cook up a basic crosstalk simulation in your DAW with a simple delay plugin (MSED and Sound Delay can do it).
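As a rough sketch of what such a crosstalk (crossfeed) simulation does — assuming nothing about MSED or Sound Delay specifically — each channel is mixed into the opposite one, attenuated and delayed by roughly an interaural delay. Real crossfeed usually also low-passes the cross path, which is omitted here; the gain and delay values are illustrative.

```python
import numpy as np

def crossfeed(left, right, sr=48000, gain_db=-4.5, delay_ms=0.3):
    """Naive headphone crossfeed: the opposite channel arrives in each ear
    late and quiet. No low-pass on the cross path, so only a first cut."""
    d = int(sr * delay_ms / 1000)   # cross-path delay in samples (~0.3 ms)
    g = 10.0 ** (gain_db / 20.0)    # cross-path gain, linear
    pad = np.zeros(d)
    l_delayed = np.concatenate([pad, left])[: len(left)]
    r_delayed = np.concatenate([pad, right])[: len(right)]
    return left + g * r_delayed, right + g * l_delayed

# A hard-panned signal: everything on the left channel.
left = np.ones(64)
right = np.zeros(64)
out_l, out_r = crossfeed(left, right)
```

With the hard-left input, the right output now carries a delayed, attenuated copy of the left channel, which is the whole point of the trick.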

Post

fluffy_little_something wrote:Was comparing two synths and noticed how much wider one of them sounds. Is there a limit to the stereo width illusion one can create? I mean, many synths have width controls these days, but setting them to max results in a different width on each synth. So it can't be just the angle, which I imagine is 180° in the extreme case, hard left and right of the head. Do synths use a kind of reverb to create that illusion?
Well, generally you get "width" when you put different stuff on the two channels, either multiple voices detuned or different phases on different channels or such, but the actual perceived width can depend on various subtle things that are not always very easy to predict.

That said, when you have something like unison, there are several ways to pan the voices, and they result in different perceptions. One approach is to put half of them to the left and half to the right, which will typically give you the "maximum width" (although you can also go "wider" by "overpanning", which sometimes works and sometimes doesn't). But if you want the sound to be perceived more as a "whole", or maybe as "filling the stereo" (i.e. not just wide, but everywhere), then panning all the voices to different positions around the stereo field will give you something closer to that. It might sound less wide in a 1-to-1 comparison with a hard-panned version, but keep its width better when you put it into a mix... or it might not. Oh, and sometimes if you want a more coherent "one sound" you might want to weight the stuff in the middle with higher gain than the stuff on the sides, which will typically reduce width, although the result might tolerate over-panning better then... so... yeah... it's not entirely straightforward.

Also if you're using unison detune to create the illusion of width, then the actual detuning rules used can also affect the final perception. Similarly if you're just using different phases on different channels, then the details of how the phase varies can make a lot of difference (but as far as I know is not very easy to predict in general).
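A toy sketch of the two panning strategies described above — hard split versus spreading the voices — using a standard equal-power pan law. Voice counts and the pan law are illustrative, not taken from any particular synth.

```python
import math

def pan_gains(pan):
    """Equal-power pan law: pan in [-1, +1] -> (left_gain, right_gain)."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)

def hard_split(n):
    """Half the unison voices hard left, half hard right: 'maximum width'."""
    return [-1.0 if i < n // 2 else 1.0 for i in range(n)]

def spread(n):
    """Voices spaced evenly across the field: more 'filling the stereo'."""
    return [-1.0 + 2.0 * i / (n - 1) for i in range(n)]

# With either strategy, every voice keeps constant total power.
for pans in (hard_split(6), spread(6)):
    for p in pans:
        l, r = pan_gains(p)
        assert abs(l * l + r * r - 1.0) < 1e-12
```

Detune rules can be layered on top the same way, e.g. spacing detune offsets over the voices like the `spread` positions.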

Anyway, the short answer is that unless the two synths use the exact same rules everywhere, simply setting the width to maximum on both is unlikely to make them match exactly.

If you throw reverb into the mix, things get even more complicated. What I said above basically applies to "dry" sounds, but how any given synth interacts with any given reverb can also vary quite a bit.
I also think that one synth sounds "higher" (as in room height) than the other. That seems even more fascinating, vertical stereo so to speak :D No idea how that works...
This likely has to do with the frequency response. The ears interpret up/down mostly based on spectral shape; play around with EQ (e.g. relatively narrow notches) in the upper-mids/treble region and you can probably observe some movement (though getting the details right accidentally is a bit tricky). In any case, if you have two synths with slightly different frequency responses, they will probably position a little differently.
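The notch experiment suggested above can be sketched with the well-known RBJ Audio EQ Cookbook notch biquad; the centre frequency and Q below are just example values, not claimed elevation cues.

```python
import cmath
import math

def notch_coeffs(f0, q, sr=48000.0):
    """RBJ audio-EQ-cookbook notch biquad, normalised so a[0] == 1."""
    w0 = 2.0 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [1.0 / a0, -2.0 * math.cos(w0) / a0, 1.0 / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def magnitude(b, a, f, sr=48000.0):
    """|H(z)| of the biquad evaluated on the unit circle at frequency f."""
    z = cmath.exp(2j * math.pi * f / sr)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

b, a = notch_coeffs(8000.0, 5.0)   # a fairly narrow notch at 8 kHz
```

Sweeping `f0` around the upper mids/treble while listening is the experiment being described; the filter itself is standard.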

Post

camsr wrote:Hard pans aren't great in headphones, although you could cook up a basic crosstalk simulation in your DAW with a simple delay plugin (MSED and Sound Delay can do it).
While I generally agree that hard pans aren't too great, I actually think almost any simple "cross-talk simulation" tends to make it all sound a lot worse. It's a nice idea and it sounds cool when you try it first time on a hard-panned mix, but really it doesn't work with anything else and doesn't really sound good in the long run. I think a much better strategy for making things "headphone friendly" is to make sure there is a little bit of stereo reverb on the mix to give the sound some spatial context and possibly some slap-back echoes on the other channel (on per track basis) if you have some very strongly panned sounds.

YMMV, this is just my thoughts on using headphones a lot.

Also, I don't think hard-panning is a good strategy to "maximise width" on speakers either; it tends to collapse the sounds to one of the speakers, and if you do something that keeps that from happening, the result usually sounds quite fine on headphones too.

Post

mystran wrote:
camsr wrote:Hard pans aren't great in headphones, although you could cook up a basic crosstalk simulation in your DAW with a simple delay plugin (MSED and Sound Delay can do it).
While I generally agree that hard pans aren't too great, I actually think almost any simple "cross-talk simulation" tends to make it all sound a lot worse. It's a nice idea and it sounds cool when you try it first time on a hard-panned mix, but really it doesn't work with anything else and doesn't really sound good in the long run. I think a much better strategy for making things "headphone friendly" is to make sure there is a little bit of stereo reverb on the mix to give the sound some spatial context and possibly some slap-back echoes on the other channel (on per track basis) if you have some very strongly panned sounds.

YMMV, this is just my thoughts on using headphones a lot.

Also, I don't think hard-panning is a good strategy to "maximise width" on speakers either; it tends to collapse the sounds to one of the speakers, and if you do something that keeps that from happening, the result usually sounds quite fine on headphones too.
It sounds fake, yes, but almost anything else (speaker emulation) sounds just as fake, just to a different degree. A key part of enjoying headphone listening IMO is relatively equal power on both cans, in the same bands. To maintain stereo, careful use of delay. I hate nothing more than taking off my headphones and finding one ear more fatigued than the other :roll:

Post

As I was using graphical envelopes, I was wondering why at least the filter and amp envelopes are not implemented in the same display, using two colors. This way one would see the times of the corresponding phases relative to each other, and one could optionally even link individual phases, so that when increasing, say, the filter attack by 3 ms, the amp attack is also increased by 3 ms, regardless of what the offset between the two is.
Another advantage would be that by putting all envelopes in one display, there is more space on the GUI and the envelope display can be bigger, i.e. easier to use.

One could do the same thing with conventional slider envelopes using double sliders, filter slider head on the left and amp slider head on the right side of each envelope control, also with optional linkage and offset.
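A minimal sketch of the linking behaviour described above, assuming a simple "follower = master + offset" rule. The class and parameter names are hypothetical, not any existing synth's API.

```python
class LinkedEnvPhase:
    """One envelope phase (e.g. attack, in ms) shared by filter and amp:
    the amp value follows the filter value at a fixed offset while linked."""

    def __init__(self, filter_ms, offset_ms=0.0, linked=True):
        self.filter_ms = filter_ms
        self.offset_ms = offset_ms
        self.linked = linked

    @property
    def amp_ms(self):
        return self.filter_ms + self.offset_ms

    def nudge_filter(self, delta_ms):
        self.filter_ms += delta_ms
        if not self.linked:
            self.offset_ms -= delta_ms   # unlinked: amp value stays put

attack = LinkedEnvPhase(filter_ms=5.0, offset_ms=2.0)
attack.nudge_filter(3.0)   # filter attack 5 -> 8 ms, amp attack 7 -> 10 ms
```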

Just an idea :)

Post

Fire up synthedit/reaktor/whatever and do it.

In the Xhip alpha, for example, you can do this using the MIDI routing functionality by assigning the same controller input to multiple destination parameters. What you'd really need, though, is the ability to create a more complex function to compute the parameter value based upon your input. This is also possible using my expression evaluator, although it isn't yet ready to be put into use.

The next best thing is to fire up something like synthedit or so on and wire things the way you want. From my own perspective writing an expression like "attack = decay * x + y" is much easier, but you can just as well use a modular to achieve the same thing with a graphic UI.
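The one-controller-to-many-parameters routing plus an expression in the spirit of "attack = decay * x + y" could be sketched like this; the `Router` class and parameter names are hypothetical, not Xhip's actual API.

```python
class Router:
    """Routes one controller value to several destination parameters,
    each through its own small expression function."""

    def __init__(self):
        self.routes = []   # list of (target_name, fn) pairs

    def assign(self, target, fn=lambda v: v):
        self.routes.append((target, fn))

    def process(self, value, params):
        for target, fn in self.routes:
            params[target] = fn(value)

params = {"filter_attack": 0.0, "amp_attack": 0.0}
router = Router()
router.assign("filter_attack")                          # plain 1:1 mapping
router.assign("amp_attack", lambda v: v * 0.5 + 10.0)   # "attack = v * x + y"
router.process(40.0, params)
```

The same idea in a modular environment is just a knob wired through a multiply and an add into two parameter inputs.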
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

Regarding the panning, just for fun I just put three instances of a stereo widening plugin in series after a synth and set each to 200%. Oddly, the pad sound did not really become any wider when I turned on the 2nd and 3rd instance, but sounded increasingly harsh. I knew from Sylenth that too much stereo width can make sounds "fall apart", but the harshness surprised me :P

Post

aciddose wrote:Fire up synthedit/reaktor/whatever and do it.

In the Xhip alpha, for example, you can do this using the MIDI routing functionality by assigning the same controller input to multiple destination parameters. What you'd really need, though, is the ability to create a more complex function to compute the parameter value based upon your input. This is also possible using my expression evaluator, although it isn't yet ready to be put into use.

The next best thing is to fire up something like synthedit or so on and wire things the way you want. From my own perspective writing an expression like "attack = decay * x + y" is much easier, but you can just as well use a modular to achieve the same thing with a graphic UI.
I don't have SE; I vaguely remember the demo, but I was too lazy to really get into it :oops: Maybe I would have if it had not crashed repeatedly and files had not been missing. It seemed immature to me...
