Oversampling to get rid of aliasing

DSP, Plugin and Host development discussion.

Post

I'm a senior programmer but I'm totally new to DSP, so please forgive me if this sounds trivial or even dumb.

I'm building a VST synth. I'm trying to test band-limited waveform generation using several methods, then pick the best one for my needs. I'm a bit puzzled about how oversampling is done. What I understand so far is that I have to:

1. Do zero stuffing, i.e. introduce a zero sample after every sample. (Counter-intuitive to my brain!!)
2. Filter the signal with a brickwall low-pass at the original Nyquist.

Is that correct? If so, then what? Does that filter produce one sample for every two samples?
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

It's never a dumb question!
I wouldn't go down the zero-stuffing path. There is no such thing as a true brickwall LP (especially since approximating one means a high-order filter, which can be unstable depending on its structure), so the best course of action (IMHO) is to go for interpolation. I'm using http://yehar.com/blog/wp-content/upload ... 8/deip.pdf and it works great in my implementation. For the reverse direction, just filter a little bit (provided you didn't alias too much) and it should be enough.
I have some distortion examples on my blog where this works quite well.

Post

Thanks!! Nice.

After reading a bit of your blog and browsing that paper, I think I have a few misunderstandings that I can't pinpoint, so I'll explain a bit of what I'm trying to do.

I'm actually doing oversampling at the oscillator level (to be band-limited, if that makes sense). So I don't really need interpolation; I can generate the actual samples at any given sampling frequency. My questions are:

1. Will that be band-limited if I filter/undersample?
2. If so, what do I filter? i.e., at what frequency?
3. If so, how do I undersample? In other words, how do two samples become one? :)
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

If your oscillator is bandwidth-limited, then you shouldn't have a problem just generating all the samples at the oversampled frequency. Then do your processing, filter again in case your process is not linear, and undersample the result.
If you want your oscillator to be bandwidth-limited (because it currently is not), then filter at the required frequency with something like a medium-order Butterworth low-pass (order 6-8 perhaps?) and check the spectrum before you undersample to see whether you managed to reduce the aliasing.
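The filter-then-undersample step above can be sketched roughly like this. This is a toy 2x decimator: the 5-tap kernel and all names are purely illustrative, and a real implementation would use a properly designed filter (such as the medium-order Butterworth suggested above, or an equiripple FIR) with its cutoff just below the target Nyquist.

```cpp
#include <vector>
#include <cstddef>

// Filter-then-decimate by 2. The 5-tap symmetric kernel below is a
// crude half-band-style low-pass chosen only for illustration; its
// taps sum to 1 so DC passes through unchanged.
std::vector<float> decimate2x(const std::vector<float>& in)
{
    static const float h[5] = { -0.0625f, 0.25f, 0.625f, 0.25f, -0.0625f };
    std::vector<float> out;
    for (std::size_t n = 0; n + 4 < in.size(); n += 2) { // keep every 2nd
        float acc = 0.0f;
        for (int k = 0; k < 5; ++k)
            acc += h[k] * in[n + k];
        out.push_back(acc);
    }
    return out; // output is at half the input sample rate
}
```

Note the order: the low-pass is applied while stepping through the oversampled data, and only every second filtered value is kept.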

Post

Oversampling an oscillator is the worst possible method to eliminate aliasing.

The best methods use the extra phase information before it is thrown away: the continuous-phase signal is band-limited first, and only then sampled.

You can use such methods in combination with oversampling in order to simplify them. In those cases it is sometimes more efficient to generate an oversampled version of the waveform, process it further (filters, waveshapers) and only then decimate.
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

Thanks for the clarification, I've never tried to write oscillators ;)

Post

S0lo wrote:I'm actually doing oversampling at the oscillator level (to be band-limited, if that makes sense).
My reply was to this comment.

No, "to be band-limited" doesn't make sense, unless the method you're using takes into account the additional phase information before you sample the signal.

You need to understand that the waveform initially exists only as an abstract, continuous concept: for example, a ramp going from -1 to +1 over some period of time. That concept isn't discrete-time at all.

We can compute the current phase and then use that value to create a sampled version, by for example:

phase += increment;             // advance phase by frequency / sample rate
phase -= std::floor(phase);     // wrap into [0, 1)
sampled = phase * 2.0f - 1.0f;  // map phase to a ramp in [-1, +1)

Our sampled signal now includes aliased harmonics! Before it was stored into the "sampled" variable, however, it was alias-free and contained all harmonics (an infinite number of them).

It is at this point that we need to apply some sort of filter to cancel out the aliased harmonics before sampling takes place.

Oversampling then becomes a simple matter of generating more samples with the same algorithm. Your algorithm (a filter) likely removes harmonic content above some point, and the continuous version of the signal has harmonic content that naturally falls off as 1/n; so by increasing the bandwidth and applying the same filter, the aliased content is reduced.

Note however that for each 6 dB (halving) reduction of that aliased content we need to generate twice the number of harmonics. If we generate a signal with its fundamental approximately at Nyquist, we have zero harmonics above it. By doubling the bandwidth (2x oversampling) we gain one harmonic, at 1/2 amplitude.

A filter can be designed with a much steeper slope than 6 dB/octave and still cost far less to process than generating the extra oversampled samples does. Because of this, it is always less efficient to rely on oversampling in place of a filter.

If you apply oversampling in addition to such a filter, you must take into account that doubling the bandwidth will at most increase the cut by 6 dB. So you can only reduce the complexity of the filter by what amounts to 6 dB less cut per octave for the same result. Sometimes this is beneficial, sometimes not.

In many cases the only advantage of oversampling is when you need those additional samples for further processing, by a filter and/or waveshaper for example. Keep in mind, however, that the cost of processing all those extra samples may be far higher than that of other anti-aliasing methods.
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

S0lo wrote: 1. Do zero stuffing, i.e. introduce a zero sample after every sample. (Counter-intuitive to my brain!!)
2. Filter the signal with a brickwall low-pass at the original Nyquist.
Just in case, there's a very similar recent topic nearby. It may be worth reading that one too (keeping in mind, of course, that as aciddose already mentioned, oversampling may not be the best method for antialiased waveform generation; searching for keywords like DPW, BLEP and BLIT will give some food for thought about alternatives).

Post

S0lo wrote:1. Do zero stuffing, i.e. introduce a zero sample after every sample. (Counter-intuitive to my brain!!)
2. Filter the signal with a brickwall low-pass at the original Nyquist.
Briefly, since later it's apparent you're more interested in generating samples at a higher frequency and downsampling...

Zero stuffing ("counter intuitive"): if you realize that samples represent instants in time (or impulses in amplitude), it's obvious that you captured nothing between the samples, so you can represent any point between samples as a zero. By inserting a zero between each pair, you've moved the sample rate up by a factor of two, but the images of the original audio band stay in the same place, because you did not add any information (if it were a football field, you moved out one of the goal posts but didn't change the field). So you need to erase the debris with the brickwall filter.
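The zero-stuffing step just described can be sketched like this; a minimal 2x example with hypothetical names. The output still has to be low-passed at the original Nyquist, as described above, to erase the images.

```cpp
#include <vector>
#include <cstddef>

// 2x zero stuffing: insert a zero after every input sample. This
// doubles the sample rate without adding information; the images it
// leaves behind are then removed by low-pass filtering at the
// ORIGINAL Nyquist (a quarter of the new sample rate).
std::vector<float> zeroStuff2x(const std::vector<float>& in)
{
    std::vector<float> out;
    out.reserve(in.size() * 2);
    for (float s : in) {
        out.push_back(s);     // original sample
        out.push_back(0.0f);  // stuffed zero
    }
    return out;               // caller low-passes this next
}
```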
1. Will that be band-limited if I filter/undersample?
2. If so, what do I filter? i.e., at what frequency?
3. If so, how do I undersample? In other words, how do two samples become one? :)
If you generate your samples at 2x and want to downsample back to 1x: (1) First, you never want to "undersample"; that would mean you don't have enough sample points to represent what you intend, and it will alias. (2) To reduce the sample rate by half, you first need to remove content at or above a quarter of the (oversampled) sample rate. Going 96k->48k, you'd first filter out everything at 24k and above. (3) Throw away the excess samples. Yep. Because you filtered in step 2, the signal is now "oversampled" by 2x; it has twice as many samples as needed to convey the information. Toss every other sample.

Now, in step 3, you might think: "but wait, the samples are not duplicates, they are different; which should I keep, odd or even?" It doesn't matter; they're the same thing, one just has a time delay of half a sample compared with the other. They'll come out of the DAC looking the same.
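Step (3) above, assuming the filtering of step (2) has already been done, reduces to something like this sketch (names are made up; as noted, either polarity of "toss every other sample" works):

```cpp
#include <vector>
#include <cstddef>

// Keep every other sample of an already-filtered 2x signal.
// Keeping the even- or odd-indexed samples only shifts the result
// by half a sample; both represent the same band-limited signal.
std::vector<float> downsampleBy2(const std::vector<float>& filtered,
                                 bool keepOdd = false)
{
    std::vector<float> out;
    out.reserve(filtered.size() / 2 + 1);
    for (std::size_t n = keepOdd ? 1 : 0; n < filtered.size(); n += 2)
        out.push_back(filtered[n]);
    return out;
}
```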

I have a cool video in progress that I think you'd like; it's being held up by other projects and commitments, but I'll try to get it out in the next three weeks. Meanwhile, I have an old article here (and some others; this is the most general, along with the sampling-theory article it references in the first paragraph): http://www.earlevel.com/main/2007/07/03 ... onversion/
My audio DSP blog: earlevel.com


Post

Thank you guys, I have a lot to read and learn.
aciddose wrote:In many cases the only advantage of oversampling is when you need those additional samples to be processed further, by a filter and/or waveshaper for example. Keep in mind however that the additional cost of processing all those extra samples may be far more expensive than other anti-aliasing methods.
CPU usage is my main concern; it's the main reason I'm trying to compare different methods. I need to be able to run at least 10 oscillators at once, with FM or with an LFO controlling the frequency. I've managed to run/modify a BLIT sawtooth/square oscillator, but those use sine/cosine calls, which are costly. I tried a sine wavetable for the BLIT method, but surprisingly got no improvement at all!! I tried a fast sine approximation, but it's not clean with BLIT, and still not much of a CPU improvement either.
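For reference, one common shape for the sine-wavetable idea mentioned above looks something like the sketch below (illustrative names, not the actual code being benchmarked). Whether a table beats the library sine depends heavily on cache behavior, which may be one reason it showed no improvement.

```cpp
#include <cmath>
#include <cstddef>

// Sine lookup table with linear interpolation. Table size is a power
// of two so the index wrap is a cheap bit mask; one guard point at
// the end avoids a branch when interpolating across the wrap.
constexpr std::size_t kSineSize = 1024;
static float gSine[kSineSize + 1];

void initSineTable()
{
    const double twoPi = 6.283185307179586;
    for (std::size_t i = 0; i <= kSineSize; ++i)
        gSine[i] = float(std::sin(twoPi * double(i) / double(kSineSize)));
}

// phase in [0, 1) maps to one full sine cycle
float fastSin(float phase)
{
    float pos = phase * float(kSineSize);
    std::size_t idx = std::size_t(pos) & (kSineSize - 1);
    float frac = pos - std::floor(pos);
    return gSine[idx] + frac * (gSine[idx + 1] - gSine[idx]);
}
```

For a 1024-point table the peak linear-interpolation error is a few parts per million, well below what BLIT summation needs, so accuracy is unlikely to be the limiting factor.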
Max M. wrote:Querying for keywords like DWP, BLEP, BLIT would give some food for thought of alternatives).
Thanks for that. Definitely researching those.
earlevel wrote:(2) So, to reduce the sample rate to half, you need to first remove content at or above a quarter of the sample rate (96k->48k, you'd first filter everything 24k and higher). (3) Throw away excess samples. Yep. Because you filtered in step 2, the signal is now "oversampled" by 2x—it has more samples than needed to convey the information, by a factor of 2. Toss every other sample.
Cool, cool. That's exactly what I wanted to know. I'll be looking at your article too; added to my bookmarks.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

As far as CPU goes, you want minblep. You can "steal" the implementation (it's trivial) from here:

http://sourceforge.net/p/protracker/cod ... /pt_blep.c
http://sourceforge.net/p/protracker/cod ... /pt_blep.h

You can even use that table if "almost good enough" is your aim. This should get you running in a hurry.

It's actually not the steepest filter, but if you're going to use oversampling you should set the "SP" value lower; for example, for 5x oversampling set SP = 1, RNS = 63. This isn't the most efficient possible FIR implementation, and inserting 40 samples is extremely expensive, but it will produce a reasonable result.

A better table would give you a steeper cut, although it would also be more expensive. Ideally you could use a table designed with a lower cutoff, using a better ratio than n/5.

I'd sort of appreciate a mention (like BSD attribution), although you don't really need to worry about it. Maybe just "took minblep code and table (by aciddose) from the protracker sourceforge project" if you were writing back-panel credits or something; more to let anyone else know the code is available.

Also note the assertions in the code there; you should enable them. You will sometimes see values like 1.00051 due to float rounding error, but the offset should never otherwise be outside 0 to 1.

Er... also remove the redundant assertion bubsy put inside the loop... that guy. :hihi:
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

The problem with aliasing is that when you have something like a sawtooth waveform, it contains an infinite number of harmonics, and those harmonics get mirrored/folded back between the Nyquist frequency and 0 Hz repeatedly (imagine a strip of paper being folded into an accordion shape; that's what happens to the infinite spectrum of your waveform when it gets sampled). No amount of oversampling can fully remove that: you can't filter out what's already folded into your audible spectrum. I find it funny that so many people who write synthesizers insist on ludicrous oversampling ratios when there's a much simpler solution.

Basically what you want is to cut down the number of harmonics you generate to only the ones that won't fold back into the audible spectrum, and you can achieve that by preparing different versions of your waveform in wavetables. Let's say you only oversample by 2x; that means harmonics can go up to 3x your original Nyquist frequency before they fold back into the audible band, which gives you about 1+7/12 octaves of headroom.

So just make wavetables for different versions of the waveform, each with, say, an extra octave's worth of harmonics: one with just harmonic 1 (a pure sine wave), one with harmonics 1 to 2, another with 1 to 4, then 1 to 8, 1 to 16 and so on until you have 1 to 1024. When you synthesize, you calculate how many harmonics you can fit below the original Nyquist frequency and use the wavetable that has at least that many, switching wavetables as you move up or down.

When your sound is synthesized, you just do the usual filtering and downsampling, and there you go: absolutely no aliasing or artifacts except the ones that come with using wavetables (if you don't want to use wavetables and prefer to add up each sine directly, be my guest). No filtering is needed either, except obviously the one before downsampling, so it stays fast; even preparing the wavetables offline needs no filtering, since it's additive. And you only need to oversample by 2x (you actually don't even need that much, but it's simpler), not something ridiculous like 16x (while still getting some aliasing in the end).
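The table-selection step described above might look something like this sketch (hypothetical names, assuming table k holds harmonics 1 through 2^k as in the octave-spaced scheme just described):

```cpp
#include <algorithm>

// Pick which wavetable to use for a given fundamental: count how many
// harmonics fit at or below the original Nyquist, then choose the
// smallest table whose harmonic count (1, 2, 4, ..., 1024) covers it.
int chooseTableIndex(double fundamentalHz, double originalNyquistHz)
{
    int needed = std::max(1, int(originalNyquistHz / fundamentalHz));
    int k = 0;                           // table k holds harmonics 1..2^k
    while ((1 << k) < needed && k < 10)  // cap at 2^10 = 1024 harmonics
        ++k;
    return k;
}
```

For example, a 440 Hz fundamental at a 44.1k rate fits 50 harmonics below 22050 Hz, so the table with 64 harmonics (k = 6) is the smallest that covers it.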

[Image: partial sums of sine harmonics]
That's the principle you'd use to prepare your wavetables. Also see http://www.wolframalpha.com/input/?i=su ... %3D1+to+16

Beats me why more people don't do that. Is it a common technique, by the way? I don't think I've ever heard of it, but it just makes sense.
Developer of Photosounder (a spectral editor/synth), SplineEQ and Spiral

Post

Wavetables were used back in the 90s when we had very little processing power available. They have many disadvantages, and once you increase the table size to compensate, you end up less efficient than other methods due to the extreme size of the tables.

A technique like table-based "minblep" is far more efficient. Only a single table is used, and it adapts to any waveform desired; waveforms do not need to be pre-generated into tables and remain fully adaptable in any way desired.

Table indices require interpolation, but only while the impulse is being written into the circular buffer. With wavetables you must interpolate values from the table continuously, on every read, every sample. With a FIR-based method, the filter is applied only at discontinuities, which occur rarely; for example, only once per cycle in the common ramp (saw) waveform.

Because fewer, shorter buffers are used repeatedly, they tend to remain in cache.
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

aciddose wrote:Wavetables were used back in the 90s when we had very little processing power available. They have many disadvantages, and once you increase the table size to compensate, you end up less efficient than other methods due to the extreme size of the tables.

A technique like table-based "minblep" is far more efficient. Only a single table is used, and it adapts to any waveform desired; waveforms do not need to be pre-generated into tables and remain fully adaptable in any way desired.

Table indices require interpolation, but only while the impulse is being written into the circular buffer. With wavetables you must interpolate values from the table continuously, on every read, every sample. With a FIR-based method, the filter is applied only at discontinuities, which occur rarely; for example, only once per cycle in the common ramp (saw) waveform.

Because fewer, shorter buffers are used repeatedly, they tend to remain in cache.
MinBLEPs are a new concept to me, but it seems like they wouldn't be so great at higher frequencies, plus not very flexible. What I suggested is a variant of my idea of directly using additive synthesis (with no oversampling at all), which gives you total flexibility (amplitude, phase and pitch for each overtone) and results as fast and clean as your cosine approximation (thankfully I've become quite good at those ;) ). Obviously, if you need lots of audible harmonics, that would be much harder on the CPU than BLEPs or additive wavetables. But think of the flexibility: you wouldn't even need to EQ your tone anymore!

Oh, and since MinBLEPs are minimum-phase, doesn't that mess up the phase of your signal? Well, not that it should really matter much if you're synthesizing a sawtooth; just wondering.
Developer of Photosounder (a spectral editor/synth), SplineEQ and Spiral
