Web Audio Modules (WAMs)

DSP, Plugin and Host development discussion.

Post

I can't seem to get MIDI working in Vivaldi (Chromium-based), or am I missing a setting? Mouse clicks on the onscreen keyboard work, and MIDI permissions are granted to the site.

Post

MIDI seems to work OK in Vivaldi 1.13.1008.44 (macOS 10.12.4). I hadn't tried Vivaldi before, but it does feel pretty performant.

Post

jariseon wrote: Hi!

Would be great to hear your feedback and suggestions!
Very cool!

Suggestion:

Both the Windows VST3 and Mac Audio Unit Extensions formats support sample-accurate parameter automation.
The benefits of this include consistent rendering of tracks regardless of audio buffer size.

I would suggest that you update the API to include a timestamp on Parameter changes. You already include a timestamp for MIDI, and my experience is that once one's code has to support sample-accurate MIDI, the addition of sample-accurate parameter support is trivial to implement.
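
For illustration, a timestamped parameter change could mirror the shape of a timestamped MIDI event, something like this (the field names are purely illustrative, not part of the current WAM API):

```javascript
// Hypothetical event shapes -- field names are illustrative only.
// MIDI events already carry a timestamp; a parameter change could
// carry the same kind of sample-offset timestamp.
const midiEvent = {
  type: "midi",
  time: 64,              // sample offset into the current buffer
  data: [0x90, 60, 100]  // note-on, middle C, velocity 100
};

const paramEvent = {
  type: "param",
  time: 64,              // same timestamp semantics as MIDI
  id: "cutoff",          // parameter identifier
  value: 0.75            // new (normalized) value
};
```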

For a good explanation (with diagrams) of the frustrating user experience caused by non-sample-accurate automation, see this post at presonus.com:

https://forums.presonus.com/viewtopic.php?f=213&t=22254

Post

Thanks for the feedback, Jeff! It was good to meet you at ADC. Could SynthEdit make WAMs one day?

Post

hibrasil wrote: Thanks for the feedback, Jeff! It was good to meet you at ADC. Could SynthEdit make WAMs one day?
Good to meet you too!

The main hurdle when porting to a new platform is not the DSP; it's always the GUI. I'm pretty much stuck with C++-based "immediate-mode" GUI code at present, so I'm not sure how easily that would translate to the web. (It's hard keeping up with technology.)

Post

Jeff McClintock wrote: The main hurdle when porting to a new platform is not the DSP; it's always the GUI. I'm pretty much stuck with C++-based "immediate-mode" GUI code at present, so I'm not sure how easily that would translate to the web. (It's hard keeping up with technology.)
Hi Jeff,

I agree. So far my experience with WAM synths has been that 20% of the porting time goes to DSP, and the remaining 80% is spent porting the GUI. Sometimes DSP ports in even less time.

Knobs, sliders, and buttons do port pretty easily, but it is laborious to translate C++ to JS manually, or to pick absolute coordinates out of imperative code and turn them into declarative parameters. A declarative GUI description format that renders in either bitmap or vector form would reduce GUI porting time; a rough sketch of the idea follows below. But even with that, there will often be custom GUI code that needs to be ported manually. It can be done, though.
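
To sketch what I mean (a made-up format, not an existing standard):

```javascript
// A hypothetical declarative GUI description: one data file that a
// C++ renderer, a canvas renderer, or an HTML/SVG renderer could
// each consume. Field names are invented for illustration.
const gui = {
  size: { width: 400, height: 200 },
  controls: [
    { type: "knob",   param: "cutoff",    x: 20,  y: 40, diameter: 48 },
    { type: "slider", param: "resonance", x: 90,  y: 40, length: 120 },
    { type: "button", param: "bypass",    x: 240, y: 40, label: "Bypass" }
  ]
};
```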

Post

Jeff McClintock wrote: Suggestion:

Both the Windows VST3 and Mac Audio Unit Extensions formats support sample-accurate parameter automation.
The benefits of this include consistent rendering of tracks regardless of audio buffer size.

I would suggest that you update the API to include a timestamp on Parameter changes. You already include a timestamp for MIDI, and my experience is that once one's code has to support sample-accurate MIDI, the addition of sample-accurate parameter support is trivial to implement.
Thanks for the suggestion!

Right now WAMs make a single call to the process() method, which renders the entire buffer. What's your opinion: should it remain like that (timestamped parameters would then be passed in an array argument), or should process() be split into multiple calls at parameter timestamps (parameters would then be set before calling process() to render each buffer slice)?

On the other hand, the Web Audio API supports interpolated audio-rate AudioParams. The issue with those is that a synth can easily have 100 parameters, and it would be inefficient to handle all of them as AudioParams when only a handful (if any) are automated. Perhaps there could be a fixed set of AudioParams passed to the process() method: say, five AudioParam buffers without fixed destinations. The user could then bind destinations on the host side.
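
To sketch the idea (the slot count and all names here are just for illustration, not an API proposal in any final form):

```javascript
// Hypothetical host-side binding of a small, fixed pool of
// audio-rate parameter buffers. Only automated parameters pay the
// per-sample cost; everything else stays event-driven.
const AUDIO_RATE_SLOTS = 5;
const bindings = new Map(); // slot index -> parameter id

function bindSlot(slot, paramId) {
  if (slot < 0 || slot >= AUDIO_RATE_SLOTS) throw new RangeError("no such slot");
  bindings.set(slot, paramId);
}

// Inside the render callback each slot arrives as a Float32Array of
// per-sample values; the host routes it to its bound destination.
// setParamBuffer() is a hypothetical plugin method.
function applySlots(plugin, slotBuffers) {
  for (const [slot, paramId] of bindings) {
    plugin.setParamBuffer(paramId, slotBuffers[slot]);
  }
}

// Example: slot 0 drives the filter cutoff at audio rate.
bindSlot(0, "cutoff");
```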

Post

jariseon wrote: Right now WAMs make a single call to the process() method, which renders the entire buffer. What's your opinion: should it remain like that (timestamped parameters would then be passed in an array argument), or should process() be split into multiple calls at parameter timestamps (parameters would then be set before calling process() to render each buffer slice)?
I think that, in general, an API that renders an entire buffer at a time with a time-stamped list of events is the better choice, simply because it's fairly trivial for a synth or effect to split buffers internally, whereas merging smaller buffers back into a larger one is essentially impossible.

Post

jariseon wrote: Right now WAMs make a single call to the process() method, which renders the entire buffer. What's your opinion: should it remain like that (timestamped parameters would then be passed in an array argument), or should process() be split into multiple calls at parameter timestamps (parameters would then be set before calling process() to render each buffer slice)?
What I do is pass events in an array (MIDI + parameter changes). The plugin then has the option of splitting the processing into multiple "sub-blocks". The programmer can still choose not to split the buffer, by ignoring the parameter timestamps and updating them once per block; for example, if some parameters are not time-critical, or if performance is paramount. It's nice to let the plugin decide the strategy rather than forcing buffer-splitting when it's not wanted.

GMPI Requirement #24: GMPI must implement a time-stamped, sample-accurate event system. Events are how realtime signals, including control changes, are represented.
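
Roughly, the sub-block strategy looks like this (a minimal sketch; renderSlice() and applyEvent() stand in for a plugin's real internals):

```javascript
// Sketch of plugin-side buffer splitting. Events are assumed to be
// sorted by their sample-offset `time` field.
function applyEvent(ev) {
  // e.g. update a filter coefficient or dispatch a MIDI message
  console.log(`apply ${ev.type} at sample ${ev.time}`);
}

function renderSlice(start, end) {
  // render samples [start, end) with the current parameter state
  console.log(`render [${start}, ${end})`);
}

function process(events, numSamples) {
  let start = 0;
  for (const ev of events) {
    if (ev.time > start) {
      renderSlice(start, ev.time); // render up to the next event
      start = ev.time;
    }
    applyEvent(ev);                // MIDI or parameter change
  }
  renderSlice(start, numSamples);  // render the remaining samples
}

// Example: one parameter change mid-buffer splits a 128-sample
// buffer into two slices. A plugin that doesn't need sample
// accuracy can instead apply all events up front and render once.
process([{ type: "param", time: 64, id: "cutoff", value: 0.5 }], 128);
```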

Post

jariseon wrote: On the other hand, the Web Audio API supports interpolated audio-rate AudioParams. The issue with those is that a synth can easily have 100 parameters, and it would be inefficient to handle all of them as AudioParams when only a handful (if any) are automated. Perhaps there could be a fixed set of AudioParams passed to the process() method: say, five AudioParam buffers without fixed destinations. The user could then bind destinations on the host side.
In my opinion, it's better to avoid hard limits on such things. I have plugins with many audio-rate parameters; often every parameter is audio-rate.
Supporting that efficiently is possible through the use of "silence flags" or something similar. The idea is that an input can be marked as "silent" to indicate it is not changing right now. This can greatly improve efficiency, because the plugin does not need to repeatedly read or perform calculations on that buffer. Note that "silence" can also mean "non-zero but steady". Hope that makes sense.

GMPI Requirement #46 : GMPI must provide a performance optimization mechanism for handling silent audio streams. This mechanism must allow plugins and hosts which are built to use this optimization to flag silent outputs and detect silent inputs without examining the buffer contents.
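
As a rough sketch (the flag and field names are invented for illustration):

```javascript
// Hypothetical silence/steady-state flag check. "Steady" covers
// both true silence and a non-zero but constant control signal.
function readParamInput(input) {
  if (input.isSteady) {
    // One read covers the whole buffer; skip the per-sample loop.
    return { constant: true, value: input.buffer[0] };
  }
  // Otherwise the buffer must be consumed at audio rate.
  let last = 0;
  for (let i = 0; i < input.buffer.length; i++) {
    last = input.buffer[i]; // per-sample processing goes here
  }
  return { constant: false, value: last };
}

// Example: a steady input avoids touching 127 of the 128 samples.
const buf = new Float32Array(128).fill(0.7);
console.log(readParamInput({ isSteady: true, buffer: buf }));
```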

Post

Some new WAMs to try, in desktop Chrome:

http://virtualcz.io

https://wam.fm/augur-synth/
