Zynaptiq Adaptiverb released


Post

zynaptiq wrote:
Hmm, we don't do anything odd – we just call the general OS "open URL" command, so my initial feeling is that this is some firewall/antivirus/UAC related issue; we'll investigate!
I think you are correct.

I see it when I launch a web page from Reaktor.

I think this may be something new to Chrome or Windows because I don't remember ever seeing that before.

Code: Select all

Log Name:      Application
Source:        Application Error
Date:          2016-08-25 03:16:08 AM
Event ID:      1000
Task Category: (100)
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      morbius
Description:
Faulting application name: chrome.exe, version: 52.0.2743.116, time stamp: 0x57a12717
Faulting module name: apphelp.dll, version: 10.0.14393.0, time stamp: 0x578999e1
Exception code: 0xc0000005
Fault offset: 0x000000000003685b
Faulting process id: 0x3dc0
Faulting application start time: 0x01d1feb9b26552a5
Faulting application path: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
Faulting module path: C:\WINDOWS\system32\apphelp.dll
Jim Hurley - experimental music
Windows 10 Pro (20H2 19042.662); i9-9900K@5.1GHz;
Cakewalk; Adam Audio A8X; Axiom 61

Post

zynaptiq wrote:
arachnaut wrote:
zynaptiq wrote:Hey guys,

quick heads-up: we've deployed ADAPTIVERB v 1.0.1 which fixes the reported crash issues and has some optimizations WRT GUI rendering CPU use (not a BIG effect though). It's of course a free update & a recommended update for all users, available using the regular DL form at http://www.zynaptiq.com/adaptiverb/adap ... downloads/

Working on v 1.0.2 now which will have a button some of you may like very much ;-)

Cheers,
Denis
In 1.0.1, when I do the Update check, I get this error:

[image: screenshot of the update-check error]
Hmm, we don't do anything odd – we just call the general OS "open URL" command, so my initial feeling is that this is some firewall/antivirus/UAC related issue; we'll investigate!
If I switch from Chrome to Firefox, it works. So it is some sort of Chrome issue.
I turned off the antivirus and disabled all extensions in Chrome, and it still fails in apphelp.dll.
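For reference, apphelp.dll is Windows' application-compatibility shim DLL, and the faulting process in the log is chrome.exe itself – so the crash happens inside Chrome after the URL has already been handed over. That hand-off is about as simple as it gets; in Python terms it's roughly the equivalent of this (just an illustration – the plugin is of course native code, and this is not Zynaptiq's actual implementation):

Code: Select all

# Ask the OS to open a URL in the default browser and return immediately.
# Whatever the browser process does afterwards (here: chrome.exe loading
# apphelp.dll) is outside the caller's control.
import webbrowser

webbrowser.open("https://example.com/downloads")  # placeholder URL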
Jim Hurley - experimental music
Windows 10 Pro (20H2 19042.662); i9-9900K@5.1GHz;
Cakewalk; Adam Audio A8X; Axiom 61

Post

I would appreciate any advice on how to achieve convincing positioning of sound sources - in particular, stage positioning of orchestral instruments - when using Adaptiverb.

The manual states the following (my emphasis):
The REVERB section serves as additional layer of diffusion placed after the RESYNTH, as a reverb fed from a tap point *before* the SUSTAIN RESYNTH, or anything in-between. It features several methods of generating reverb. You will note that, unlike traditional reverbs, this one has *no early reflection engine* and no elaborate controls for tailoring frequency response or frequency dependent decay times – in ADAPTIVERB, these functions are not needed, as *early reflections would defeat the purpose of the plugin*, and the frequency response and decay bias can be controlled in a much more efficient, better sounding and adaptive manner using the HARMONIC CONTOUR FILTER engine.

REVERB MODEL
Selects the algorithm used. Options are:
- ALLPASS; in this mode, the reverb tail is created using allpass filters. The mode is dual-mono for maximum decorrelation. Nearly all classic reverbs use this model. ALLPASS mode is suited for small to medium rooms, as well as for “random hall” type longer tails.
- R-TRC (RAYTRACE); in this mode, a 3D simulation of a room is used to calculate reverb by simulating the paths soundwaves take from a virtual sound source to a virtual listening position. This mode is true stereo and uses two sound sources *pointed directly away from the listener*, for a sound that blends very well in “invisible reverb” scenarios.
- R-TRC HD (RAYTRACE High-Definition); this is a variation on the R-TRC algorithm that uses two sound sources *placed at a different angle to the listener* (40 degrees of separation), and that has a slightly higher reflectivity index for the simulated room. Compared to R-TRC, this mode “feels” slightly larger and has more “movement”.
* Does this mean that Adaptiverb only provides the 'ambience', and that Adaptiverb can (and should) be combined with other plugins that provide early reflections?

* Should these other plugins provide full ERs of realistic spaces (as VSS2 does, for example) or should these other plugins only provide a sense of distance (as TDR's Proximity does)?

* If no other plugins are used, where will the sounds appear to come from (in the 3D room simulation)? Is Adaptiverb like a plate reverb in the sense that it provides a feeling of space but not position?

Post

For positioning/real room modelling, yes, you'll want to use something else or add a dedicated reflection device. The description "similar to a plate, giving the sense of space but no real position" is generally speaking pretty good, actually. I tend to think of it as "put space into your sounds, not your sounds into a room".

The ALLPASS reverb model has earlies included, but to get them to show, you'll obviously want to use REVERB SOURCE values below max so that some of your signal's transient energy enters the ALLPASS verb. It's dual mono, BTW, for higher decorrelation. The RAY TRACE is true stereo and implicitly does have some sort of position info – but we're really not focused on the room simulation side of reverb in ADAPTIVERB.

Indeed, you can combine ADAPTIVERB's late tail with a dedicated early reflection engine; we had that approach back in RAYVERB, and it worked nicely.
Zynaptiq - Audio Software Based On Artificial Intelligence Technology, makers of PITCHMAP: Real-Time Polyphonic Pitch Correction And Mapping.

Post

zynaptiq wrote:I tend to think of it as "put space into your sounds, not your sounds into a room".
That's a crystal clear, luminous description of this wonderful plugin.

Post

Hey ladies & gents,

so I ended up writing a rather elaborate post on GS when asked how I'd set up ADAPTIVERB for mastering purposes – which it does rather well: it can add some really cool depth/glue/euphonic harmonicity highlighting if set up to do the invisible bionic sustain thing. I figured there may be some useful info in it, so I'm reposting it here :D
From the GS thread:
This is a great technique for reconstructing spaces and ambience.
Denis - how do you typically configure AV for mastering work?

That depends on the specifics of the signal to some extent.

For "natural" music – so anything NOT featuring hyper-compressed synthetic drums etc –

I may start out with the Bionic Sustain Beauty preset (#3 in Music Production Reverbs). The core thing here is to use 100% REVERB SOURCE so you get the full transient/noise removal; this is important if you want to make sure the mix isn't "moved back".

I'll typically set the D/W to 50/50 for some headroom, and maybe readjust that later. Adjust pre-delay and lowcut with attention to detail – the lowcut prevents long kick drums and sub bass notes from entering the tail; you'll rarely want to sustain those, so as to keep the bottom focused. The pre-delay value can make a difference in getting the "groove" right – as the SUSTAIN always "lags" a little (the oscillators keep trying to catch up, which is part of what makes it sound so organic), I'd recommend starting with a zero value.

I've found SUSTAIN and (ALLPASS) SIZE values of 0.8 to be a good spot for a lot of music, but you may want to try tweaking those to your signal just in case there's a better setting. I typically leave REVERB MIX at zero, as when you have REVERB SOURCE at max, even the minimum setting will leak a little into the reverb; this is by design. Why? Because that's where the magic happens 8)

Then, switch the HCF to TRACK - FOLLOW, and take it up just a little bit, until the reverb starts getting masked by the signal, then dial it back a percent – you want this to be right at the edge of being perceived for maximum effect, just like you'd use microshifts and Haas delays on lead vocals, maybe even slightly lower in relative levels. Maybe add 2-3% of BREATHINESS for some sizzle style detail, but careful with that, it can break the "invisibility". Finally, take the WET GAIN all the way down, then increase it until you start hearing the tail as a tail, then back off a little.
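For quick reference, that starting point boils down to roughly this – a plain cheat sheet with the GUI labels as keys (a reminder, not an actual preset file; values as mentioned above, tweak by ear):

Code: Select all

# "Invisible bionic sustain" mastering starting point, natural material.
adaptiverb_mastering_start = {
    "preset":        "Bionic Sustain Beauty",  # #3 in Music Production Reverbs
    "REVERB SOURCE": 1.00,   # 100% = full transient/noise removal
    "DRY/WET":       0.50,   # 50/50 for headroom, readjust later
    "PRE DELAY":     0.0,    # start at zero, adjust for groove
    "LOWCUT":        "by ear – keep long kicks / sub bass out of the tail",
    "SUSTAIN":       0.8,
    "SIZE":          0.8,    # ALLPASS size
    "REVERB MIX":    0.0,    # the minimum still leaks a little, by design
    "HCF":           "TRACK - FOLLOW, raised to just below the masking point",
    "BREATHINESS":   "2-3% max, or the invisibility breaks",
    "WET GAIN":      "up from minimum until the tail reads as a tail, then back off",
}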

Variations on the theme can be had by using RICHNESS with a fifth interval – a little goes a long way, and while overdoing it can sound crazily cool, it's probably not what you're after in mastering. A little bit of RANDOMIZE can also be great, but it also causes movement, which may make the sustain more obvious.

For very dense stuff or music with heavily compressed and/or synthetic drums, a little more tweaking may be needed – some of the noise in the drums may start entering the tail due to the compression "sustaining" the treble. Similarly, the HCF may "see" compressed treble as "pitch" to retain.

What I sometimes do in such cases is increase REVERB MIX up to max, preferring the RAY TRACE setting, but with a SIZE around minimum. Then, feed some direct signal into the REVERB by slightly decreasing REVERB SOURCE. Dampen the reverb by setting DAMP higher; you can be generous with this, as the damping filter sounds very nice – I sometimes end up around 50-60% of the range. This tends to suppress the leaking transients somewhat. The rest of the procedure would be similar.
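Same kind of cheat sheet for the dense-material variation – only the values that change relative to the starting point above:

Code: Select all

# Variation for dense mixes / heavily compressed or synthetic drums.
adaptiverb_mastering_dense = {
    "REVERB MODEL":  "R-TRC (RAY TRACE)",
    "REVERB MIX":    1.0,                   # up to max
    "SIZE":          0.0,                   # around minimum
    "REVERB SOURCE": "slightly below max",  # feed a little direct signal into the REVERB
    "DAMP":          "around 50-60% of the range",  # tames the leaking transients
}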

If the input transients are so heavy that that doesn't help, I've had good success with using a reverb send rather than an insert, and adding UNCHIRP in front of the reverb on the send. You can use UNCHIRP to get rid of the transients and noise really well. There's a factory preset called Discard Original Transients in the General Purpose & Sound Design category. Use the DRY/WET to control the amount.
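And the send-based routing spelled out, again just as a reminder of the chain described above, not a preset:

Code: Select all

# Send/return chain for very transient-heavy material: clean up the send
# signal with UNCHIRP before it hits ADAPTIVERB, keep the insert path dry.
send_chain = [
    ("UNCHIRP",    {"preset":  "Discard Original Transients",  # General Purpose & Sound Design
                    "DRY/WET": "controls how much transient/noise removal"}),
    ("ADAPTIVERB", "set up as in the dense-material variation above"),
]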

While I'm at it with the psycho-acoustic mastering / sweetening hints, let me go into a little more detail on using our other stuff in that context, as I regularly use all of them in a row; I'll focus on the off-label-ish techniques, as the uses of UNMIX::DRUMS and UNFILTER in mastering would be kinda obvious, for example 8)

UNCHIRP can do some really cool stuff in mastering, using its transient synthesis in frequency dependent mode. There are presets to get you started with this, for example Bark Sweetener Medium HD in the Psycho-Acoustic Processing category. It uses the transient resynthesizer in UC to selectively add transient energy at the Bark band frequencies *only if they're missing*, which makes for a minimally invasive but perceptually very noticeable "refreshing" of the audio.
The reason is, simply put, that human hearing extrapolates the general impression of bandwidth and "clarity" of a signal from what it measures in the roughly 25 so-called "Bark bands". The exact frequencies are different for each individual, but they're always roughly similar. By having tight transient energy in those bands simultaneously, the ear is "tricked" into thinking that the signal is generally wide-band and rich in transient energy – even if the mix itself is of a darker variety. You won't get this effect with anything else I'm aware of, and you won't hear the typical EQ ringing you'd get with narrow peaks, as the energy is only added if there actually is a transient, and then it's not EQed in, but synthesized on top.
Don't overdo this effect – a little goes a long way, and too much will break stuff.
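If you're wondering where those roughly 25 bands actually sit: the classic Zwicker table has 24 critical bands, with ballpark center frequencies as below – population averages only, since, as said, individual ears vary:

Code: Select all

# Approximate Bark (critical band) center frequencies in Hz, after Zwicker's
# classic 24-band table. Population averages; individual ears differ.
BARK_CENTERS_HZ = [
      50,  150,  250,  350,  450,  570,  700,  840,
    1000, 1170, 1370, 1600, 1850, 2150, 2500, 2900,
    3400, 4000, 4800, 5800, 7000, 8500, 10500, 13500,
]

for band, freq in enumerate(BARK_CENTERS_HZ, start=1):
    print(f"Bark band {band:2d}: ~{freq:5d} Hz")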

On a side-note, a mastering engineer buddy of mine compared this to using a Massive Passive switched on, but with the bands all set to unity gain in boost – then carefully selecting the filter frequencies in a way that "tickles" the right way, using just the ever-so-subtle effects of the passive filters. *Very* subtle, and he admittedly has a heavily modified Massive in terms of caps, PSU, cabling and tubes...ah well :wink:

Another thing about UNCHIRP that, if used subtly, can help get a lot of clarity (well, a "lot" in mastering lingo, obviously) is the SYNC function.
The original design intent is to reduce the transient smearing caused by linear phase FFT filtering in codecs. What it does is center its processing around a transient, then frequency dependently adjust phase until the transient "looks" like it makes sense again. It works really well for that, actually.
As a "side effect", the signal between the transients is processed in such a way that stochastic signal components (read: noise, reverb, mud, mush) are suppressed more strongly that correlated components, resulting in an overall "tightening" and "de-crap-i-fying".

If you overdo this it WILL show (and sound somewhat denoiser-ish, though wide-band-denoiser-ish), but in small amounts it can work some real magic. A good starting point is the factory preset called Init; switch it to HD mode, then adjust SYNC to taste.

Another one for the psycho-acoustic stuff is UNVEIL. While its primary purpose is reducing or boosting the amount of reverb contained in a mixed signal, it also grabs stuff that is mathematically similar to reverb – like "mud", or for example interference artifacts from mixing stuff that is slightly out-of-tune or out-of-phase (as in pretty much all real world mixes, basically). This can do some cool stuff. I show that at trade-shows sometimes – UNVEIL making an ADAT-based production lose the typical veiled/boxy feeling.

Good settings for this type of "unveiling" (hence the product name, actually) are LOCALIZE at max, REFRACT & PRESENCE at 50% or more. Adjust TRANSIENT BYPASS so that the attacks stay clear (using the SOLO is great for setting the threshold), and ADAPTATION is kind of like the release time on a compressor in this context (though it is technically in no way related to RMS or similar). Then, use positive FOCUS amounts for...well...adding focus. You have frequency dependent control of this with the 10 bias sliders, which add to / subtract from the main knob's value.
I've found focusing the effect on the low to medium mids is most effective for modern genres, and increasing the reverb amount with negative slider values for the low end can nicely enhance the "bigness" of a mix.
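Condensed into the same kind of cheat sheet as the ADAPTIVERB settings above (UNVEIL GUI labels as keys, values as described; adjust by ear):

Code: Select all

# "Unveiling" / de-mud starting point for mastering, per the description above.
unveil_demud_start = {
    "LOCALIZE":         "max",
    "REFRACT":          "50% or more",
    "PRESENCE":         "50% or more",
    "TRANSIENT BYPASS": "so the attacks stay clear (use SOLO to set the threshold)",
    "ADAPTATION":       "acts roughly like a compressor's release here",
    "FOCUS":            "positive amounts to add focus",
    "BIAS SLIDERS":     "boost the low/medium mids; negative values in the lows for bigness",
}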

Probably more information than you were looking for eh :wink:
Zynaptiq - Audio Software Based On Artificial Intelligence Technology, makers of PITCHMAP: Real-Time Polyphonic Pitch Correction And Mapping.

Post

Thanks for posting this! I would like to request that you think about possibly making a few tutorial videos, starting with the basics. Simon's videos are great, and watching him play as he gets the feel of a new plugin is very instructive, but systematic study is good too, especially for those who aren't quite as, ummm, advanced as Simon. Anyway, just my .02 worth... Thanks!

Post

did you watch


https://www.youtube.com/watch?v=tJ662nMcLXI


Not an in-depth tutorial but a quite good overview that complements Simon's work.
Windows 7, Cubase 9.5 and some extra plug-ins | Takamine EN-10C and PRS Mira

Post

Thanks!!

Post

ErikH wrote:did you watch


https://www.youtube.com/watch?v=tJ662nMcLXI


Not an in-depth tutorial but a quite good overview that complements Simon's work.
Already posted 5 pages back.

Post

New, freshly baked video just in. It's a mix between showcasing sound examples and explaining to some extent how they are made – Sound Design with ADAPTIVERB: Drones, Cross-Filtering, Freeze and Pitch Quantization.

https://www.youtube.com/watch?v=zF11J2Ip8GE
Zynaptiq - Audio Software Based On Artificial Intelligence Technology, makers of PITCHMAP: Real-Time Polyphonic Pitch Correction And Mapping.

Post

Ahhhh... This new video is EXACTLY the kind of resource I'd been looking/hoping for! Thanks so very much!!

Post

Wonderful what Adaptiverb does to the orchestral chord at 7:10

Post

zynaptiq wrote:
TabSel wrote:Will the HCF Keyboard tracking be playable by MIDI, so that you could actually "play" the reverb tail?
You know, there's this other product called PITCHMAP that lets you enforce MIDI-played pitch onto an audio stream... :D
Petri Alanko did that a whole lot for his Quantum Break soundtrack IIRC.

On a more serious note: we thought about that, but feel that the CPU load is already pretty much at the max of the scale, as is the number of functions (much more and it will start eating my brain in addition to my CPU) – though I do admit that would be very cool so who knows.

What I feel would be even more useful though is a sidechain input that you can use as HCF tracking source – so that the ADAPTIVERB wet path would conform to the sidechain's tonality rather than the input's.

More snapshots would probably not be a problem though. Maybe 8?
Denis, I have to bring this topic up again: it would clearly be so much more practical to have the HCF Keyboard Tracking playable via MIDI.

Application: I use Adaptiverb to sonically enhance synth sounds, played via MIDI of course. I want to adapt the HCF to the sound I play, so I'd like to be able to feed the synth a MIDI chord AND simultaneously send those MIDI notes to AV, which sustains the synth sound and applies the HCF to the chord I play.

Any chance? Really, I mean, there'd be no additional CPU cycles wasted, and it would not be more complicated for the user – it would just be a simple additional "Live" radio button option next to the "A", "B", ... snapshot radio buttons.
When engaged, the keys would be determined by the plugin's MIDI input instead of the snapshot key settings...

Please?

Nevertheless, a Sidechain input for HCF would still be a cool addition ;)

Post

How long does the demo version last for?
