my two cents on the origin story

Post

When introducing the LinnStrument, Roger emphasizes a contrast with the far-too-discrete modes of electronic music playing, where the input device consists mainly of on/off switches plus a little extra. He says this has led to a decline of solo performance by electronic means, as the sound production is largely automated and doesn't bring out human talent the way traditional instruments do.

While this provides a good contrast with MPE devices, it seems unfair not to mention the early period of electronic instruments. The first application of electronic sound production, it seems, was to create novel modes of musical expression, via monophonic instruments like the Theremin and Ondes Martenot. While these suffer the limitation of being monophonic, they offer expressive capabilities that I think are unique in all of music. In my experience, even modern MPE controllers cannot emulate the unique expressive motifs of the early electronic instruments. This is not to say that one is superior to the other, but that each has its own specialty.

So if I were writing the introductory narrative about why the LinnStrument is special, I would emphasize that modern technology allows a combination of expression and polyphony that was not possible before. And the design of the input device, which lets human hands take full advantage of these technological powers, is key.

What do you think?

Post

Hi mbsq,

Thank you for your thoughts. For those not familiar with the comments of mine that mbsq is referencing, they're in this essay, linked from the LinnStrument product page:

http://www.rogerlinndesign.com/why-expression.html

Mbsq, you make a valid point in that there have been many significant steps in the evolution of expressive touch control. The Akai EWI and EVI are other significant examples and favorites of mine. That said, my essay is not an origin story of expressive touch control nor a comment on the ills of automated sound production. It's merely a statement about the advantages of expressive touch control over the on/off switches of a MIDI keyboard, and my view that on/off switches have contributed to electronic music being used largely for background music.

I think this is a fascinating subject and if you'd care to tell your view of the origin story of electronic instruments, I'm sure others would value it. And I welcome others' comments on the topic in this thread. To kick it off, here's a thought-provoking question:

Are envelope generators and LFOs--as well as static sound sources like oscillator waveforms and samples--still relevant in the era of expressive control, or are they merely obsolete elements of the era of on/off switches?

Post

I am no expert on this stuff, but let me tell you a bit of what I know, focusing on the Ondes Martenot.

The Ondes Martenot was developed around the same time as the Theremin, but independently. The first version was completed in 1928. Martenot was a cellist and a radio operator for France during the First World War. He discovered the possibilities of electronic sound generation while working with radios, and developed a mechanical way of precisely controlling the pitch and volume of audio-range waves, in a format that somehow married a piano with a cello. As the control of both parameters was continuous, expressive possibilities abounded. With electronics, he could produce new types of tones that could be purer or harsher than those produced by classical instruments. Nowadays these tones are the backbone of analog synthesis: sine, pulse, triangle, etc. He also used filters and special amplifiers.

The Ondes Martenot's volume is controlled by a "lozenge" pressed with the left hand, and the pitch is controlled by a ring worn on the right hand, attached to a system of wires and pulleys running along a reference keyboard. Later models added playable keys with lateral movement, so that natural vibrato could be produced while playing the keys. The combination of range, accuracy, subtle control over pitch and volume, infinite sustain, and beautiful tones makes it a unique instrument.

Now, these are only two dimensions of expression, since the selection of timbre was mostly discrete and not changed much in the middle of playing. But still, this can be quite expressive. The Ondes is used in many contemporary classical compositions, especially by Olivier Messiaen, and is taught in some conservatories. It gained more attention in recent times when Jonny Greenwood of Radiohead began to use one extensively.

So what happened between the late '20s and the Minimoog in 1970? There was a lot of technical research into the electronic production of sound and into recording techniques, but from what I gather, not much focus on new kinds of instruments like the Theremin and Ondes--at least not much that really caught on. As electronics became more compact and affordable, the marriage of electronic sounds with piano-style playing was popularized and commercialized. This is the "era of on/off switches" that Roger refers to.

My main point is that there *were* highly expressive electronic instruments in the early days, and in fact this was the original focus of electronic music. But it dwindled. Now it is coming back. Is it just a retro movement, rediscovering the genius of those French and Russian pioneers? Clearly not. The new expressive devices are new ideas coming from people raised on the long research project into electronics that brought us tape and filters and analog synths and FM and MIDI. The Theremin and Ondes achieved their own perfection long ago and found their place in avant-garde orchestras and bands, without much further technological change and without contributing to the grand research project beyond the initial stages.

So why not just return to the designs of Martenot and Theremin? Those designs were great and have earned a lasting niche in music, but the MPE devices of today offer something different. The processing power available today enables methods of controlling musical parameters that couldn't be achieved in the early days. And I believe a key component of this is marrying expression and polyphony. The old designs were inherently limited in polyphonic capability, for both technological and ergonomic reasons. If the LinnStrument/Continuum/Seaboard were only monophonic, they would arguably be mere variations on another early electronic instrument: the Trautonium. https://en.wikipedia.org/wiki/Trautonium
We could debate whether controlling volume with pressure and velocity, in the same hand that selects pitches, is better than with a sensitive button in the other hand. Good arguments could be made on both sides. But I think polyphony (not just some chords but fully independent lines) is a more remarkable achievement.

Regarding your question about LFOs and envelopes: I think you have a point. They are not so important with expressive control--no need for them on the Theremin or Ondes. But they might be an important background component in designing a particular sound, at least envelopes with regard to velocity response.

Post

Great story, mbsq. Thanks!

Post

Roger_Linn wrote: Wed Apr 10, 2019 3:28 pm Are envelope generators and LFOs--as well as static sound sources like oscillator waveforms and samples--still relevant in the era of expressive control, or are they merely obsolete elements of the era of on/off switches?
It's my opinion that envelopes and LFOs are essential to the sound design process, and as such, they will remain integral to the performance aspect of expressive playing, MPE or otherwise. Expressive controllers may allow you to interact with a given sound in realistic ways, but that doesn't account for how the sound is created. Under the hood of every sound, particularly those synthesized from scratch, modulation is the primary ingredient.

Hell, even with sample-based instruments, it could be argued that the modulation is built into the physical instrument from which the samples were recorded. When using an MPE controller to play a drum sound, for example, be it sampled or synthesized, let's not take for granted what the source material provides that makes the expressive control so convincing.

What I'm saying is, whether we're talking about mechanical modulation or the expressivity that comes from real-time human interaction, as I see it, those two things are not mutually exclusive. You must first emulate the sound before you can further manipulate it by hand. I certainly manage to employ the modulation matrix on my synths to great effect, and still have plenty of reasons to incorporate the LinnStrument's X, Y, and Z axes from there.

Cheers!

Post

Good points all, John. But an alternate view might include:

* ADSRs were merely stop-gap solutions to make on/off switches less... well... on/off. With expressive control, you can shape each note's envelope in performance. For sharp-attack/long-decay sounds, a resonator or physical model of a string will naturally decay at a slow rate that adjusts to loudness and pitch, eliminating the need for a fixed decay envelope. Or perhaps expressive control calls for replacing the ADSR with a rise/fall accelerate/slew modifier, in which expressive increases in pressure can be sped up (for sharper attacks) or slowed down, and expressive decreases in pressure can be slowed down for long decays. Either way, you still have expressive control.

* LFOs were another stop-gap solution, compensating for the inability to produce vibrato or tremolo with on/off switches; they're no longer necessary with expressive control.

* Static-waveform oscillators and samples produce one timbre regardless of performance loudness, which is fine for on/off switches but doesn't work well for continuous and responsive control of loudness, pitch and timbre. Malleable waveforms--either from modulated waveshapes or physical models--change in a natural and expected way in response to expressive control.

I can't wait to read the impassioned responses. :)

Post

Again though, Roger--and don't get me wrong, I agree with the sentiment of what you're saying--it seems to me that we're taking for granted the basic building blocks involved in making said "malleable waveforms". Just because modern MPE controllers allow for more input from the player, which can improve the various performance aspects of a given sound, that doesn't mean you can forgo the physics of sound design, which is entirely based on modulation. The sound of a guitar string, for example, is a hugely complex and harmonically rich oscillator which, if it were to be synthesized, would be built from a multitude of otherwise static modulators long before it could be interacted with dynamically.

I think we may be oversimplifying the concept of the ADSR or LFO here. When I build drum sounds on the Tempest, I have velocity mapped to numerous modulations, most of which are modulators modulating modulators: i.e. envelopes modulating themselves to simultaneously change the curvature and duration of their various stages in direct response to velocity input, etc. It makes for a very fluid and realistic response, but it's still done entirely with LFOs, envelopes, and basic waveforms.
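
As a crude illustration of that "modulators modulating modulators" idea--hypothetical Python, not the Tempest's actual modulation engine--here's an envelope whose duration and curvature both track velocity:

Code: Select all

def velocity_decay(t, velocity, base_decay=0.4):
    """Toy decay envelope in which velocity modulates the envelope's own
    duration and curvature: harder hits last longer and decay with a more
    percussive shape. Constants are arbitrary, chosen only to illustrate
    the routing, not taken from any real instrument."""
    decay_time = base_decay * (0.5 + velocity)  # duration scales with velocity
    curve = 1.0 + 3.0 * velocity                # curvature scales with velocity
    ramp = max(0.0, 1.0 - t / decay_time)       # linear ramp from 1 down to 0
    return velocity * ramp ** curve             # bend the ramp by the curve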

It doesn't matter what you want to call these modulations as a whole, no matter how complex, or whether you want to think of them as one equation or a bunch of little pieces; a larger algorithm is still just a series of integers, fractions, ratios, and what-have-you. We just boil it down to E = mc² for the sake of simplicity. I can emulate wavetable behavior on the Tempest, for instance, even though technically I'm not using a cohesive series of single-cycle waveforms, but the end result is the same. So, as I see it, it's just an exercise in semantics to call it "a rise/fall accelerate/slew modifier", which is just a fancy, all-in-one way of saying... And here's me being too lazy to break that down (smirk).

Cheers!


Post

Good points, John. I have no problem with modulation; it's just that, as a modulation source, I prefer my learned finger gestures over static devices like ADSRs and LFOs that are intended to repeat one specific finger gesture over and over. I think my fingers' modulations make more interesting music, especially after a little practice. Plus, depending on how I choose to express each note, I can perform a linear or exponential curve or anything in between. And as a bonus, I don't have to think about math. :)

Regarding the rise/fall accelerate/slew modifier, I'm not sure I explained it correctly. An ADSR provides the entire envelope for you, for on/off-switch play. By comparison, what I was proposing modifies your performed envelope. For example, for long-decay sounds, it could slow down your performed finger release; for short-attack sounds that would be difficult to strike fast enough, it accelerates your performed strike. But you still have to perform the envelope. I saw something similar to this in one of U-He's synths, but it's still a new idea.
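
To sketch the idea in code--just a toy illustration with invented names and constants, not any shipping implementation--such a modifier reshapes the performed pressure rather than generating an envelope of its own:

Code: Select all

import math

class TouchEnvelopeModifier:
    """Toy sketch: reshape a performed pressure envelope instead of
    replacing it with a fixed ADSR. Rising pressure is sharpened by a
    shaping exponent; falling pressure is slowed by a one-pole lag,
    stretching the performed release."""

    def __init__(self, sample_rate, attack_sharpen=0.5, release_seconds=1.5):
        self.attack_sharpen = attack_sharpen  # < 1 sharpens the attack curve
        # Per-sample coefficient for the slowed-down fall.
        self.fall_coeff = math.exp(-1.0 / (release_seconds * sample_rate))
        self.level = 0.0

    def process(self, pressure):
        """pressure: performed finger pressure in 0..1, one value per sample."""
        shaped = pressure ** self.attack_sharpen
        if shaped >= self.level:
            self.level = shaped  # rising: track the sharpened finger curve directly
        else:
            # falling: decay toward the performed value no faster than the lag allows
            self.level = shaped + (self.level - shaped) * self.fall_coeff
        return self.level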

And regarding malleable waveforms, I’d prefer the timbre to change naturally in response to how I play. I expect it to become simpler and less harmonically rich/complex when played softly, and when played hard to exhibit perceived high loudness artifacts like richer/harder harmonics and some added dissonance. But these would all be built into the sound source, removing the need to build complex modulations to achieve the same effect. And I don’t need to know what’s inside—subtractive, FM, granular, samples, physical models, etc.— it doesn’t really matter to me. What matters to me is to quickly select a responsive timbre that I really like, then perhaps make adjustments to a few simple music-focused controls like bright/dark, consonance/dissonance, thin/full, attack faster/slower, decay faster/slower, solo/ensemble, bass/mid/treble, room small/large etc. I’d prefer to focus on the music and performance technique rather than the engineering. I’d prefer to let the engineers do the engineering. When I put on my musician hat, I prefer to do music.
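
To make "built into the sound source" concrete, here's a toy additive sketch--the rolloff and stretch curves are invented for illustration, not any product's algorithm--in which soft playing yields a near-pure tone and hard playing adds richer harmonics and a touch of inharmonicity, with no external modulation routing at all:

Code: Select all

import numpy as np

def malleable_tone(freq, pressure, duration=1.0, sr=48000, n_harmonics=16):
    """Toy 'malleable waveform': the timbre response lives inside the
    source. Low pressure -> steep harmonic rolloff (near-sine); high
    pressure -> richer harmonics plus slight partial stretching for
    added edge. All curve shapes here are illustrative guesses."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    rolloff = 1.0 + 6.0 * (1.0 - pressure)  # softer playing = steeper rolloff
    stretch = 1.0 + 0.002 * pressure        # harder playing = slightly inharmonic
    for n in range(1, n_harmonics + 1):
        out += (1.0 / n ** rolloff) * np.sin(2 * np.pi * freq * n * stretch ** (n - 1) * t)
    return pressure * out / np.max(np.abs(out))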

Post

Hmm... I think we agree with each other, at least insofar as the musicality of expressive control is concerned, but I don't think we're talking apples to apples here, Roger. The question was "are envelope generators and LFOs [...] still relevant in the era of expressive control?" My answer is, yes, they are absolutely still relevant, because you cannot create truly expressive sounds using finger gestures alone. At some point, math has to come into play (pun intended).

Granted, it could be said that there's no longer a need to use a simple LFO to reproduce something like, say, a vibrato; but the fact remains that static oscillators, LFOs, and envelopes (or constructs thereof) form the very foundation of the aforementioned synthesis methods (subtractive, FM, granular, samples, physical models, etc.) which are being employed, within the structure of the sound itself, to make gestural playing possible. They're baked in. The fact that the instrumentalist doesn't necessarily have to concern themselves with the math and engineering behind the sound doesn't suddenly render these basic components of sound design irrelevant; quite the opposite, in fact: arguably they're being applied more than ever, and with greater complexity than previously thought possible.

Besides which, if you aspire to use expressive sounds without knowing or understanding how they're made, or if you're not given the tools to make or modify them yourself, well... ultimately, you're at the mercy of the engineer or sound designer, much in the same way that a violist is at the mercy of the luthier. And that's fine, I suppose, to a point. But not everyone wants to play saxophone or harmonica sounds with their LinnStrument, and even so, LFOs (and the like) are still involved at the macro level in the making of those sounds, even if their application is not as obvious or superficial.

As a sound designer and instrumentalist myself, I certainly enjoy the musical benefits and freedom of expression that gestural playing brings to the table; however, despite the vast potential of modern computing power and all that we can do with it, I recognize and embrace the fact that even the most basic elements of synthesis still play a foundational, if secondary, role in the sound design process. That's not trivial. FM and physical modelling, for example, both rely heavily on stacking and cross-modulating sine waves... It doesn't get any more rudimentary than that.
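
To underline how rudimentary those building blocks are, a complete two-operator FM voice really is just one sine wave modulating the phase of another (minimal sketch; the ratio and index values are arbitrary examples):

Code: Select all

import numpy as np

def two_op_fm(freq, ratio=2.0, index=3.0, duration=1.0, sr=48000):
    """Minimal two-operator FM: one sine modulating the phase of another.
    Richer FM patches just stack and cross-route more of these.
    Ratio and modulation index values are arbitrary examples."""
    t = np.arange(int(duration * sr)) / sr
    modulator = np.sin(2 * np.pi * freq * ratio * t)          # modulating sine
    return np.sin(2 * np.pi * freq * t + index * modulator)   # phase-modulated carrier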

So, yes, envelopes, LFOs, and static-waveform oscillators are still relevant *at the component level*. It's just that we no longer have to rely on them primarily as a way to approximate what could otherwise be accomplished more intuitively by touch.

It's ironic, perhaps, that we've gone to such lengths to achieve the kind of expression we've had for millennia with acoustic instruments (smirk). But then, electronic sounds are bound by nothing but the imagination, so I'm excited for the future in that regard.

Cheers!


Post

Roger, I disagree on one point: that it is better for the musician to modify only a few parameters that correspond to "natural" sonic concepts. The problem is that some of these are scientifically dubious, like bright/dark, and others are meaningful enough but mask a huge amount of complexity beneath psychological categories (thin/full, small/large reverb). I think it is really beneficial for the synthesist to be able to manipulate the truly basic components of a sound: harmonics! When we know how a thing really works, new possibilities open up. It's like the difference between medical science and folk remedies.

Also, I think that in the examples you give about envelopes, you are really just describing more complex envelopes. We might go with purely gesturally generated envelopes, as with the early electronic instruments, but there will be some types of attack that you will not be able to physically produce with just one input device. You'll never get a plucked string or a hard mallet sound just by measuring how fingers touch soft pads. So you'll have to introduce a synthetic envelope.

Post

Wonderful thread...
Envelopes: Yes, I still want and need them, but in a different form. With on/off switches I always liked the multi-segment ones; now I don't need those any more, as I prefer to play the envelope myself. But I still need attack and decay times, and on top of that I need an attack level!
At the moment, velocity is typically mapped to the complete ADSR. What it should be mapped to instead is an attack level; then I'd like to control the sustain level completely independently of the attack phase, which is determined by the velocity.
The same applies to the release, which could have an extra segment, or even a complete envelope of its own. The logical way to interpret release velocity is to modify the release envelope's parameters; in most synths it just scales the release, which is mostly fine...
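
A piecewise sketch of that routing--hypothetical code, not any particular synth's implementation--where velocity sets only the attack peak, the sustain level is fixed and independent, and release velocity rescales the release segment:

Code: Select all

def asr_with_attack_level(t, gate_time, velocity, release_velocity,
                          attack=0.01, decay=0.2, sustain=0.6, base_release=0.5):
    """Toy envelope: velocity -> attack level only; sustain level is
    velocity-independent; release velocity rescales the release time.
    Assumes gate_time > attack + decay; all constants are examples."""
    peak = velocity                               # velocity sets the attack level
    if t < attack:
        return peak * (t / attack)                # rise to the velocity-scaled peak
    if t < attack + decay:
        frac = (t - attack) / decay
        return peak + (sustain - peak) * frac     # settle to the fixed sustain level
    if t < gate_time:
        return sustain                            # sustain ignores velocity entirely
    release = base_release * (1.5 - release_velocity)  # harder release = faster fade
    return max(0.0, sustain * (1.0 - (t - gate_time) / release))
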
LFOs: On each and every preset designed for a keyboard, I have to get rid of all the LFO modulation. Especially annoying are those delayed vibratos--just awful compared to a played vibrato... All that "life" of an evolving patch is lame compared to the way I would play it, and mostly that life is created with more or less complex LFOs... That doesn't mean they are useless in general: basically they are oscillators that can enrich the sound itself, especially if they run fast...
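
For reference, the preset-style delayed vibrato being complained about looks roughly like this (illustrative code; the rate, depth, and timing values are typical-sounding guesses), whereas a played vibrato would come straight from the finger with no fixed rate or delay:

Code: Select all

import numpy as np

def delayed_vibrato_cents(t, rate=5.5, depth_cents=30.0, delay=0.4, fade=0.3):
    """The classic keyboard-preset delayed vibrato: a fixed-rate LFO
    whose depth fades in after a hold time. A played vibrato on an
    expressive controller would instead track the finger directly."""
    ramp = np.clip((t - delay) / fade, 0.0, 1.0)              # depth fades in after delay
    return depth_cents * ramp * np.sin(2 * np.pi * rate * t)  # pitch offset in cents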

With the LinnStrument I have both key elements of expressive control: hits and continuous excitation. The envelope is essential for the hits...

A note on the origin story: I think polyphony, in combination with advanced studio technology, killed the expression. mbsq is right to point to the Theremin, Ondes Martenot, and Trautonium (Moog started by building Theremins...). But in the beginning, synths were monophonic, as most expressive acoustic instruments are as well. Even an expressively played guitar solo would concentrate on a single string. Jan Hammer was the master of playing synths expressively; he could get the same expression and sound world as any guitar player. If the keyboard is played with one hand, you have the other free to apply all kinds of expression...
The moment polysynths came along, the pianists took over and applied their virtuosity to the synth. Of course, both hands were then occupied. And only a few, like Vangelis, could afford a CS-80. The virtuosity of a pianist is of a different nature than that of a violin player...
Thanks to Roger and the other inventors of expressive controllers, we are back. Nowadays I would play any Trautonium or Ondes Martenot piece on the LinnStrument. It's the perfect replacement, as it has the same expressivity, and the way it is played is very close to the original instruments. Replicating the sounds isn't an issue nowadays with all the VA technology available. The Theremin is a different story, but it's so hard to learn--and you would still get a comparable result with the LinnStrument... I've lost interest in Theremins, but I still love to listen to the masters (there are some of them in Berlin...).
Yes, we need different synthesizer elements and sounds to get the best out of our expressive controllers...

Post

Good points, all, and too many to respond to. I do think that text can be a poor medium for describing advanced concepts. It would probably be easier in an interactive conversation around a lunch table. :)

Also, my initial question was intentionally provocative and I do see merits in ADSRs, LFOs, oscillators, etc.

Another way of stating my idea is that often we're making music that our limited tools prefer us to make. I'd prefer to make tools that better permit the realization of our unrestricted musical ideas.

Post

So, let's do lunch (grin)...

Cheers!

Post

Yeah!

Post

Is Randy from Madrona Labs coming to Superbooth again? I know you are not, but to discuss this over lunch, a synth developer is essential for sure... We could set up a Skype session and have a nice meal across the globe...
