The truth about bit-depth (and digital audio ‘resolution’)


Post

This guy is making the same mistake that most people here make when talking about noise.

He thinks that when you try to cancel out an 8 bit file against a 24 bit file, and all you can hear as a result is noise/hiss, then there is no difference between the files, only some noise.

The glaring thing he is missing is that the noise IS the loss of detail: the tiny little bits and chunks of waveform being distorted due to lack of resolution. It is not going to sound like an extra background singer, or reverb on the vocals, or a new trumpet track in the background. What was he expecting to hear? The tiny differences between 2 waveforms that were trying really hard to be identical are going to sound like random little bits of dirt. Tiny, incoherent random noise across the entire thing.


At 8 bits, at 16 bits, at 24 bits, the computer is trying its best to make that waveform as close as possible to the original. At 8 bits it does an OK job; at 16 bits it does it with near-inaudible error.

If 2 waveforms were truly identical, there would be no residue when you cancel them out. If it cancels out, they are identical. If you hear something, or see something in the analyzer graph, then they are not identical. Showing that there IS NOISE, is exactly the proof you were looking for that these waveforms are not identical.

Please point me to the video where someone tries to cancel out 8bit against 24bit and the result is 0 noise, just a perfect cancellation that shows that the waveforms are perfectly identical in their entire shape. I definitely want to see that video. Go ahead, I'll wait. :tu:
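For anyone who wants to try this without waiting for a video, the cancellation test being argued about can be sketched in a few lines of Python (a hypothetical illustration of my own, not taken from any of the posts; helper names are mine): round a high-resolution sine onto an 8-bit grid, subtract, and look at what's left.

```python
import math

# One cycle of a sine: a "high-res" float version vs. a copy
# rounded onto a signed 8-bit grid (steps of 1/127).
n = 256
hi = [math.sin(2 * math.pi * i / n) for i in range(n)]
lo8 = [round(s * 127) / 127 for s in hi]

# "Cancelling out" = subtracting one waveform from the other.
residue = [a - b for a, b in zip(hi, lo8)]
peak = max(abs(r) for r in residue)

print(f"peak residue: {peak:.5f} (0.0 would mean perfect cancellation)")
```

The residue is never exactly zero (so the waveforms are indeed not identical), but it is also bounded by half a quantization step (under 0.004 of full scale here): both sides of the argument in one number.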

Post

Guenon wrote: Tue Feb 12, 2019 5:38 pm From this point forward, I almost might as well answer with quotes of what I've already said, anyway.
jochicago wrote: Wed Feb 13, 2019 12:06 am I'm focused on describing what's happening to the waveform when you stuff it into a sampling format.

[...]

I made this graph as a visual representation of what happens when you grab an analog audio wave and you put it in a digital wave file.
Guenon wrote: Tue Feb 12, 2019 5:38 pm In my experience, "nicking and scratching the waveform" hasn't been that much of a talking point when dealing with state of the art tape equipment, so it's sort of humorous to see the (dramatically lower level) hiss in digital systems described in such terms -- just because of the mental imagery of how the hiss is brought into existence in a digital process.
jochicago wrote: Wed Feb 13, 2019 12:06 am This is exactly how a wave file works. There are no caveats. It is not open to interpretation.
Guenon wrote: Tue Feb 12, 2019 5:38 pm Ask yourself: if you add noise to a waveform, how does that waveform differ from the original, from sample to sample basis? Observed up close?

Post

You guys talk about noise all the time. But there's a better term for sound that's not supposed to be there: harmonic distortion.

The difference? Static background noise is present even when there's supposed to be silence. Truncation noise (which is masked by dithering noise) is not present in silence.

So you'd better talk in terms of harmonic distortion, or THD%. When it's lower than 1% you have trouble hearing it. This is what made the loudness war possible in the first place. I wonder whether SACD masters are less limited and thus have less THD.
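As a quick sanity check on that "1%" figure (my own back-of-envelope conversion, not from the post; the helper names are hypothetical): a THD percentage converts to dB relative to the fundamental with 20·log10(pct/100), so 1% THD sits at -40 dB.

```python
import math

def thd_percent_to_db(pct):
    """Convert a THD percentage to dB relative to the fundamental."""
    return 20 * math.log10(pct / 100)

def thd_db_to_percent(db):
    """Inverse conversion: dB below the fundamental to a percentage."""
    return 100 * 10 ** (db / 20)

print(thd_percent_to_db(1.0))   # 1% THD   -> -40.0 dB
print(thd_db_to_percent(-60))   # -60 dB   -> 0.1 %
```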
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is served over https!!

Post

Guenon wrote: Wed Feb 13, 2019 7:39 am From this point forward, I almost might as well answer with quotes of what I've already said, anyway.
I don't think repeating yourself with the same wrong concepts is going to add anything. You haven't made your case and I don't think you have any supporting documentation for your position that the waveforms are magically identical when comparing digital lossy systems that diverge in level of detail by several orders of magnitude.

It is a completely senseless argument to make, that in terms of fidelity 24 bits = 16 bits. In that video that guy even says 24 bits = 8 bits. Since you think that the waves magically stay the same, why not keep going? Let's say 24 bits = 4 bits, or 3 bits. Let's just compress the song into something that can be played by a 1985 Nintendo game console and pretend it sounds identical, same waveform. Somehow, by the beauty of "un-intuitive" DSP logic you can compress a 40mb wave file into less than 1mb and it is the exact same quality!... maybe just a touch more noise. Come on.

Personally, I reached full understanding. This was a good exercise and it brought me to a lot of reading.

I don't have anything else to add. If you want to think 24 bits quality = 16 bits, ignoring all the specs discussed and having nothing to substantiate your claim, go right ahead. That's the true beauty in all of this, you don't need to understand how formats or your DAW work to make music, and at 16 bits files you have enough quality that it won't matter much. Enjoy.

Post

jochicago wrote: Wed Feb 13, 2019 8:35 am I don't think repeating yourself with the same wrong concepts is going to add anything.
the problem here is, it's you who is consistently wrong, on a technical level. it's you who keeps repeating the same thing over and over, without any understanding of the underlying processes you're trying to describe and assess.
jochicago wrote: Wed Feb 13, 2019 8:35 am You haven't made your case and I don't think you have any supporting documentation for your position that the waveforms are magically identical when comparing digital lossy systems that diverge in level of detail by several orders of magnitude.
umm, yeah there is. it's called "Sampling Theorem". you are again using loaded language, which prevents you from fully understanding this topic.
jochicago wrote: Wed Feb 13, 2019 8:35 am It is a completely senseless argument to make, that in terms of fidelity 24 bits = 16 bits.
it is a technically accurate argument to make. within the dynamic range of both formats, fidelity is absolutely, bit-for-bit identical. the difference is the noise floor. you're confusing two different aspects here.

in an 8-bit format, the noise floor is so high, that you indeed start losing "detail" because these details are lower than the noise floor. since 8-bit format has a dynamic range of about 48dB, this is clearly audible.

however, that is not the case for 16-bit - the 16-bit format has plenty of dynamic range for you to never encounter any noise, unless all you do is jack up your playback volume and listen to John Cage's 4'33". the noise floor of the format is low enough to prevent losing any audible fidelity. just because it's not matching a theoretical maximum doesn't mean it's worse - it isn't worse unless it's audibly worse.
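The ~48 dB and ~96 dB figures being thrown around in this thread come from the rough 6 dB-per-bit rule; the textbook sine-wave SQNR formula adds another 1.76 dB on top. A sketch (helper name is mine):

```python
def dynamic_range_db(bits, sine_sqnr=False):
    """Approximate dynamic range of N-bit linear PCM.
    6.02 dB per bit; the full sine-wave SQNR formula adds 1.76 dB."""
    return 6.02 * bits + (1.76 if sine_sqnr else 0.0)

for bits in (8, 16, 24):
    print(f"{bits:2d}-bit: ~{dynamic_range_db(bits):.0f} dB")
# -> ~48 dB, ~96 dB, ~144 dB
```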

have you run those ABX tests yet? or do you prefer blabbing on about "fidelity" without actually understanding the subject matter?
jochicago wrote: Wed Feb 13, 2019 8:35 am I don't have anything else to add. If you want to think 24 bits quality = 16 bits, ignoring all the specs discussed and having nothing to substantiate your claim, go right ahead.
you've already made up your mind before you started, and you haven't changed it, because all you do is misunderstand the technical details due to your insistence of using loaded language.
jochicago wrote: Wed Feb 13, 2019 8:35 am That's the true beauty in all of this, you don't need to understand how formats or your DAW work to make music, and at 16 bits files you have enough quality that it won't matter much. Enjoy.
yet again, you suddenly switch from playback to recording, and pretend that i/others claim that if the 16-bit format is OK for playback, it's also OK for recording. stop mixing the two, it's confusing the hell out of you.
jochicago wrote: Wed Feb 13, 2019 1:51 am He thinks that when you try to cancel out an 8 bit file against a 24 bit file, and all you can hear as a result is noise/hiss, then there is no difference between the files, only some noise.
that's because that's a technically correct explanation, and yes, that's exactly what happens. the magnitude of the noise is at issue here. or, to put it in other words, it's not a question of noise floor being present, it's a question of whether noise floor actually matters. for 8-bit format, it does. for 16-bit format, it doesn't, not for playback - it's so low you can't hear it anyway.
jochicago wrote: Wed Feb 13, 2019 1:51 am The glaring thing that he is missing, is that the noise IS the loss of detail. The tiny little bits and chunks of waveforms that are being distorted due to lack of resolution.
again with the emotionally loaded language. yes, they are being distorted. by -96dB or thereabouts. this difference is not audible, not for playback. you won't hear it, and neither will anyone else.
jochicago wrote: Wed Feb 13, 2019 1:51 am If 2 waveforms were truly identical, there would be no residue when you cancel them out.
that's not how digital audio works. digital is, by its nature, a finite, discrete format. it has limitations one way or the other, which means it will have a noise floor.
jochicago wrote: Wed Feb 13, 2019 1:51 am Please point me to the video where someone tries to cancel out 8bit against 24bit and the result is 0 noise, just a perfect cancellation that shows that the waveforms are perfectly identical in their entire shape. I definitely want to see that video. Go ahead, I'll wait. :tu:
you can do that yourself. in fact, i've just done it in my DAW.

step 1: get a tone generator and run a sine wave at, say, -12 dB.
step 2: send it to a parallel channel and invert phase (you should get complete silence on master)
step 3: put a bitcrusher on parallel channel and set it to 8 bit resolution

result: complete silence down to 8 bit noise floor (which is -48dB). the original waveform has cancelled out perfectly down to silence within the confines of the format. this definitively proves that, within the dynamic range, 64-bit float and 8-bit integer formats are bit-for-bit identical, and the only difference between the two is the noise floor. (-48 is without dither, with dither i can get the noise floor down to ~-72dB with a slope to -60dB at the high end of the spectrum)

in this experiment, no "details" were lost, because all useful signal was above the noise floor. the details of the original sound were preserved perfectly, as long as those details were above the noise floor.

or, to put it another way, the noise floor is what matters. for the 16-bit format, it's at -96dB. the question that you keep evading is: does noise at -96dB matter? is it audible? the answer is: it is not. this is not just an opinion - this is a fact. test it with ABX if you don't believe me.
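The DAW experiment above can be replicated in plain Python (a sketch under the same assumptions: a -12 dBFS sine, an 8-bit grid, no dither; constants and helper names are mine). The RMS of the cancellation residual lands right around the predicted 8-bit quantization noise floor:

```python
import math

N = 4096
CYCLES = 31                        # a prime number of cycles per window

def quantize(x, bits):
    """Round onto the grid of a signed linear PCM format."""
    steps = 2 ** (bits - 1)        # 128 levels per polarity for 8-bit
    return round(x * steps) / steps

# step 1: the -12 dBFS sine (amplitude 0.25)
sine = [0.25 * math.sin(2 * math.pi * CYCLES * n / N) for n in range(N)]
# steps 2+3: the bitcrushed parallel channel, phase-inverted and summed
crushed = [quantize(s, 8) for s in sine]
residual = [a - b for a, b in zip(sine, crushed)]

rms = math.sqrt(sum(e * e for e in residual) / N)
rms_db = 20 * math.log10(rms)
print(f"residual after cancellation: {rms_db:.1f} dBFS")
```

The post's -48 dB is the 6 dB-per-bit dynamic-range figure for 8 bits; the measured RMS residual comes out a few dB below that, consistent with quantization error averaging less than half a step.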
jochicago wrote: Wed Feb 13, 2019 12:06 am I'm not trying to describe noise. I'm focused on describing what's happening to the waveform when you stuff it into a sampling format. I used that expression to try to use something meaningful beyond the specs of the process, and also to connect it with the idea that you guys have that "the only difference between resolutions is noise". That's not how it works. It's close enough, but not it.
yes, that's exactly how it works, no matter how much emotionally charged language you care to use. within the confines of the noise floor, the signal is preserved perfectly. what isn't preserved are the details below the noise floor. details above the noise floor are preserved pretty much perfectly, as is shown with my sinewave test (which you can replicate yourself).
jochicago wrote: Wed Feb 13, 2019 12:06 am I made this graph as a visual representation of what happens when you grab an analog audio wave and you put it in a digital wave file.

Let's zoom in to the tiny tip of an audio peak:

wave peak.jpg

-

wave dots.jpg

I don't know that I can make it much clearer than this. This chart illustrates how the process actually works. No guesses or intuition here, this is what computers do.
this graph is why you misunderstand this subject - you have no understanding of what it is that you're looking at on these pictures.

what you've shown here is not what happens when you sample with different bit depth. instead, it shows what happens when you sample with different sampling rates. the "details" that you are referring to, are not really "details", they're frequencies. the picture perfectly illustrates what "band-limited signal" is. the details that "aren't preserved" on your illustrations, are ultrasonics - not "details lost due to lower bit depth".

please go to the very first page, and watch the Montgomery video, and try to understand what "band-limited signal" is, and why preserving these "tiny details" is not necessary for sound reproduction, and why, even though the waveform doesn't look the same after sampling at 44.1KHz, it still sounds the same within the confines of the format (i.e. up to 22KHz).

TL;DR instead of going on about things you don't understand, how about starting with this: do you think ~96dB of dynamic range (really, 110+dB, but let's pick worst case) and 22kHz of frequency range is enough to reproduce any piece of music imaginable? if not, why?
Last edited by Burillo on Wed Feb 13, 2019 10:08 am, edited 1 time in total.
I don't know what to write here that won't be censored, as I can only speak in profanity.

Post

jochicago wrote: Wed Feb 13, 2019 8:35 am
Guenon wrote: Wed Feb 13, 2019 7:39 am From this point forward, I almost might as well answer with quotes of what I've already said, anyway.
I don't think repeating yourself with the same wrong concepts is going to add anything. You haven't made your case and I don't think you have any supporting documentation for your position that the waveforms are magically identical when comparing digital lossy systems that diverge in level of detail by several orders of magnitude.

It is a completely senseless argument to make, that in terms of fidelity 24 bits = 16 bits. In that video that guy even says 24 bits = 8 bits. Since you think that the waves magically stay the same, why not keep going? Let's say 24 bits = 4 bits, or 3 bits. Let's just compress the song into something that can be played by a 1985 Nintendo game console and pretend it sounds identical, same waveform. Somehow, by the beauty of "un-intuitive" DSP logic you can compress a 40mb wave file into less than 1mb and it is the exact same quality!... maybe just a touch more noise. Come on.

Personally, I reached full understanding. This was a good exercise and it brought me to a lot of reading.

I don't have anything else to add. If you want to think 24 bits quality = 16 bits, ignoring all the specs discussed and having nothing to substantiate your claim, go right ahead. That's the true beauty in all of this, you don't need to understand how formats or your DAW work to make music, and at 16 bits files you have enough quality that it won't matter much. Enjoy.
I was going to reply just with quotes and let it be :), but you are so close, and the above is such an extreme case of putting words in someone's mouth, I just had to quote it in its entirety and open my mouth yet again, haha. Just please, relax, and take it all friendly like. You are reading quality sources, just interpreting them your way, and you are sincere about it. I've been half waiting for you to "get it" and have a real lightbulb moment about this particular concept, as again, you are so close. Otherwise I wouldn't have bothered.

Yet it seems you think I'm saying something completely opposite. I'm not. If you aren't intentionally changing my message, and instead honestly interpret what I'm saying as quoted above, it just means you think I'm someone who isn't quite educated on these matters and just offering you a clueless online debate, and so you are just backhandedly responding in kind. Please don't do that. Actually just calmly consider what I'm saying. I'm on your side :)
jochicago wrote: Wed Feb 13, 2019 1:51 am the noise IS the loss of detail. The tiny little bits and chunks of waveforms that are being distorted due to lack of resolution. It is not going to sound like an extra background singer, or reverb on the vocals, or adding a new trumpet track in the background. What was he expecting to hear? The tiny differences between 2 waveforms that were trying really hard to be identical, is going to sound like random little bits of dirt. Tiny, incoherent random noise across the entire thing.
Indeed, as I have also pointed out:
Guenon wrote: So take, say, any pristine 24 bit recording of a nice acoustic instrument. Do a bit depth reduction and make it 16 bit, or heck, even 10 bit. Flip the polarity and mix 1:1 with the original. What ever you hear, then, is the difference between those two signals.

There's no magic, no "true sound of an instrument", no "all the odd and even harmonics on that sound defining that a violin sounds like a violin and an acoustic guitar like acoustic guitar" there to be found in that difference.
Then, this thing:
jochicago wrote: Wed Feb 13, 2019 8:35 am Since you think that the waves magically stay the same, why not keep going? Let's say 24 bits = 4 bits, or 3 bits. Let's just compress the song into something that can be played by a 1985 Nintendo game console and pretend it sounds identical, same waveform.
You dislike magic in this context just as I do :), and I definitely don't think the waves stay the same, so we are in agreement there as well. It would be really strange and magical thinking to say the waveform is 1:1 the same. As you know, hah, there isn't any magic involved. Lower bit depth equals higher noise floor. In the resultant waveform, the noise is an error in the shape itself, as the resulting noise is of course a part of the waveform itself. Thus the output waveform isn't the same as the input waveform. I have said this before, see this from my previous post:
Guenon wrote: Tue Feb 12, 2019 10:44 am If there is (extremely quiet, but still) more noise in a signal, how could the waveform be 100% identical? The noise is in the resultant waveform, which is different as a result. But then again, it really is just noise, so the waveform is preserved just as nicely as in any system adding very quiet noise. There's no magic involved that makes the result somehow "less true to the original instrument" in any other way.
All I'm saying is, the sum total end result is that you have very subtle noise in the waveform, and that's all it really is. Describing the methods used in digital audio systems that make this noise happen just lets you understand how it comes to be -- and at the same time, in the end, again, the noise is really really subtle. So when pondering delivery format bit depths, from a listener's perspective it's exactly the same as taking a super super high resolution recording, a theoretical one with no noise at all, then mixing in that same subtle noise at a level you don't hear, and then observing the resultant waveform, how the individual sample values jump compared to the original waveform. As in, "hey, they aren't in the same position as before, there's some noise in there."

When dealing with discrepancies that small, all kinds of mystical thinking comes easily into play (not talking about you but in general), and in essence, on my part I'm trying to demystify it a bit. Pun intended. Putting it into a correct proportional perspective; as in, "it adds noise in this manner, yes, and it's pretty darn subtle." Not in any way trying to put magical thinking into it, on the contrary trying to remove some of the often encountered magical thinking out of it.
BertKoor wrote: Wed Feb 13, 2019 8:23 am You guys talk about noise all the time. But there's a better term for sound that's not supposed to be there: harmonic distortion.

The difference? Static background noise is present even when there's supposed to be silence. Truncation noise (which is masked by dithering noise) is not present in silence.

So you'd better talk in terms of harmonic distortion, or THD%. When it's lower than 1% you have trouble hearing it. This is what made the loudness war possible in the first place. I wonder whether SACD masters are less limited and thus have less THD.
Good post, and yeah, this is partly correct. Doing plain quantizing without dither, you don't get any noise where there is silence. And you also get distortion nasties that have different characteristics than "just noise", or tape hiss. However, the distortion isn't masked by dithering, and is instead remedied by it. When you observe that sort of distortion, and how high the harmonic artifacts peak on a spectrum analyzer, then replace that non-dithered quantizing error with the nice uniform error you get through dithered quantization, the peaks actually drop down, poof, and you get nice low level noise. It's a neat demonstration I've done myself in a signal lab :), and a particularly unintuitive one unless studied and experimented with.

This moment in Monty's video (@11:36) runs exactly that experiment: https://youtu.be/cIQ9IXSUzuM?t=696
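The "poof" described above can be sketched numerically (a toy version of my own with illustrative parameters, not Monty's actual demo; helper names are mine): quantize a very quiet sine with and without TPDF dither, then compare the level of one harmonic bin via a single-bin DFT.

```python
import math, random

random.seed(0)
N = 8192
K = 64                 # fundamental: 64 cycles per window
STEP = 1 / 128         # one LSB of a signed 8-bit grid

def bin_db(x, k):
    """Level (dBFS) of DFT bin k of signal x (single-bin DFT)."""
    re = sum(v * math.cos(2 * math.pi * k * n / N) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / N) for n, v in enumerate(x))
    return 20 * math.log10(2 * math.hypot(re, im) / N + 1e-12)

# a sine only 2 LSBs tall -- quantization error is a big fraction of it
sine = [2 * STEP * math.sin(2 * math.pi * K * n / N) for n in range(N)]

plain = [round(s / STEP) * STEP for s in sine]    # no dither: a staircase
tpdf = [round((s + (random.random() - random.random()) * STEP) / STEP) * STEP
        for s in sine]                            # TPDF dither, +/-1 LSB

h3_plain = bin_db(plain, 3 * K)   # 3rd-harmonic artifact
h3_tpdf = bin_db(tpdf, 3 * K)     # same bin after dithering
print(f"3rd harmonic without dither: {h3_plain:.1f} dBFS")
print(f"3rd harmonic with dither:    {h3_tpdf:.1f} dBFS")
```

The undithered version shows a distinct harmonic peak; with dither that peak drops into an even, uncorrelated noise floor, which is exactly the unintuitive result described above.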

Post

I don't want to get into a point-by-point. I disagree with a number of your statements. I'll hit the bigger ones.
Burillo wrote: Wed Feb 13, 2019 9:33 am umm, yeah there is. it's called "Sampling Theorem". you are again using loaded language, which prevents you from fully understanding this topic.
Nope. As the name implies, the sampling theorem is about sampling rate, not bit depth. And even then:
https://intelligentsoundengineering.wor ... ous-thing/

Let's side-track to sampling for a moment. In short you can make a mathematics case that 44.1k is enough sampling rate to cover "human hearing", but I'll add that (1) some people can hear beyond 22k, (2) you need room for a clean roll off in the last few k, and (3) a higher sampling rate helps to cope with unwanted distortion that can be introduced at many points, from your DAC system to quantization error. There is a strong case for sampling at 96k and few people would argue against it.


this graph is why you misunderstand this subject - you have no understanding of what it is that you're looking at on these pictures.
what you've shown here is not what happens when you sample with different bit depth. instead, it shows what happens when you sample with different sampling rates.
Nope. I'm showing both horizontal difference (more samples) to demonstrate increased sampling rate, and vertical movement (finer placement of the dots) to demonstrate that a higher-res system can record more precise detail. But you do make a ton of assumptions.

you've already made up your mind before you started, and you haven't changed it, because all you do is misunderstand the technical details
The funny thing is, that's what I think you are doing. You keep tripping yourself with things like the Nyquist/sampling theorem to answer the bit depth topic you don't understand with an answer that talks about sampling rate.

This is like the 4th article I quote on bit depth. From Presonus:
A higher bit depth enables the system to accurately record and reproduce more subtle fluctuations in the waveform (see Fig. 1). The higher the bit depth, the more data will be captured to more accurately re-create the sound. If the bit depth is too low, information will be lost, and the reproduced sample will be degraded. For perspective, each sample recorded at 16-bit resolution can contain any one of 65,536 unique values (2^16). With 24-bit resolution, you get 16,777,216 unique values (2^24)—a huge difference!
https://www.presonus.com/learn/technica ... -Bit-Depth

That's from the makers of S1. Please proceed to explain to me how stupid Presonus engineers are because of Nyquist.

Post

@jochicago stop thinking of digital samples in decimal values and logarithmic dBs, it's not intuitive for understanding how it actually works.
a) what your graph shows is content above 20kHz. However "analog" it looks, it won't go through most amps and speakers; what is captured at 44.1kHz will.
b) higher resolution means you can reconstruct the signal more faithfully, but it doesn't matter, because nothing at the end of the chain benefits from that faithfulness (higher frequency response and lower noise floor). Everything else is the same.

Even if you have super good speakers and super good amps, your ear will still only hear what's at 44100.
There's no nuance or detail, it's high frequency content that at worst you cannot even reproduce and it gets filtered out anyway; at best, you can't hear it because your ears are too slow to react to such rapid movement.

And this has only to do with sampling frequency not bit depth. You can still record ultra high frequencies even at 8 bits.
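The sampling-rate half of that point (completely independent of bit depth) can be sketched with the folding formula: a tone above Nyquist doesn't disappear, it reflects back into the band unless it's filtered out first. The numbers and helper name below are illustrative:

```python
def folded_frequency(f, fs):
    """Where a pure tone of f Hz lands after sampling at fs Hz
    with no anti-alias filtering (spectral folding)."""
    f = f % fs
    return fs - f if f > fs / 2 else f

print(folded_frequency(25_000, 96_000))   # 25000: preserved at 96 kHz
print(folded_frequency(25_000, 44_100))   # 19100: folds into the audible band
```

Which is why the capture chain band-limits before sampling; bit depth never enters into it.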

I will explain this to you in binary once more because, well, you have issues with understanding how this works.
____ ____ 0101 0001 0110 0100
0000 0000 0101 0001 0110 0100
(first 8 bits don't exist, or are blanked, or are filled with noise.)
top 16 bits don't change.
where's the added resolution?
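That binary picture can be checked directly in code, using the usual convention that a 16-bit sample is promoted to 24-bit by appending eight zero LSBs (this layout is my own sketch of the same idea):

```python
# the 16-bit pattern from the post above
v16 = 0b0101000101100100

# promote to 24-bit: the same value on a finer grid, low byte zeroed
v24 = v16 << 8

print(f"16-bit: {v16:016b}")
print(f"24-bit: {v24:024b}")
print("top 16 bits unchanged:", (v24 >> 8) == v16)
```

The appended byte can only ever hold material more than 96 dB down; the existing 16 bits are untouched, which is where the "where's the added resolution?" question gets its answer.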

Etienne1973 wrote: Wed Feb 13, 2019 12:59 am
I'm confused. :help:

1. Isn't this video the proof that the count of steps vertically (bit depth) matters at low signal levels?

2. Does this mean the higher the signal level the less bit depth matters?
1. Yes it matters, that's why it's advisable to record at 24 bits with healthy input levels. In a DAW today, most if not all processing happens at 32-bit FP or 64-bit, so it doesn't matter anymore.
2. yes


Post

I highly suggest "Digital Audio Explained" by Nika Aldrich.

don't quote presonus, they're trying to sell you their newest audio interface, not make you understand audio.

also: it doesn't matter if "some" (very small subset of young people) can hear above 20k (i did when i was 20, up to 22ish), because nobody mixes that high up and instruments were not designed with that frequency range in mind.

because frankly, most of these claims come from people over 25/30, who already have an impaired frequency range. (i can hear only up to 19k)
jochicago wrote: Wed Feb 13, 2019 10:10 am Nope. I'm showing both horizontal difference (more samples) to demonstrate increased sampling rate, and vertical movement (finer placement of the dots) to demonstrate that a higher-res system can record more precise detail. But you do make a ton of assumptions.
What you fail to understand is that higher vertical resolution only results in a lower noise floor, because the signals, after being reconstructed, look identical.

You are not wrong that it samples in more detail; you are wrong about what that entails.

Post

Ploki wrote: Wed Feb 13, 2019 11:07 am don't quote presonus, they're trying to sell you newest audio interface, not make you understand audio.
well, actually there's a salient point in there; they do point out that even the best 24-bit ADCs are actually only accurate to 18-20 bit resolution in the first place. Presonus kinda supports the argument that anything much beyond 16-bit gets kinda moot.
my other modular synth is a bugbrand

Post

whyterabbyt wrote: Wed Feb 13, 2019 11:12 am
well, actually there's a salient point in there; they do point out that even the best 24-bit ADCs are actually only accurate to 18-20 bit resolution in the first place. Presonus kinda supports the argument that anything much beyond 16-bit gets kinda moot.
Oopsy, i didn't bother to read all of it.
why did jochicago quote it then? it goes against his point

(talking about redundancy: check out the new Steinberg 384/32-bit integer interface)

Post

jochicago wrote: Wed Feb 13, 2019 10:10 am Nope. As the name implies, the sampling theorem is about sampling rate, not bit depth.
yes, it is. however, first of all, you're constantly bringing up sampling rate in context of this discussion and show graphs relating to sampling rates as opposed to bit depth, and treat the two as if they were interchangeable and mean the same thing.

but more importantly, the sampling theorem is directly relevant to the point i was making - namely, that given identical sampling rate, 16-bit and 24-bit signals will indeed be "magically identical", bar the noise floor inherent to the format. "sampling theorem" is a shorthand way of saying that, for a system that stores a signal, any signal within its resolution is preserved the same way. higher resolution does not make you store a different signal - just one which has more dynamic range.
jochicago wrote: Wed Feb 13, 2019 10:10 am And even then:
https://intelligentsoundengineering.wor ... ous-thing/
yes, i've read that as well. the study itself notes that the ability to distinguish 16-bit vs. 24-bit audio remains an open question, so it bears little relevance to the "number of steps" back-and-forth we've been having in this topic. more importantly, on the question of sampling rate, the difference was basically "a tiny positive result under very specific conditions and duration of test". in other words, even if i grant you this point, the results of the meta-analysis show that high-resolution audio is very far from the "dramatic aural revelation" many people (yourself included, apparently) tout it to be.

as for the pretty graphs, it's really a 'duh'. band-limited signals behave that way, and they're supposed to do that. if your rise time is tiny, that means you have very high frequency content there (see square wave waveform in the Monty Montgomery video, it really explains this very well).
jochicago wrote: Wed Feb 13, 2019 10:10 am Let's side-track to sampling for a moment. In short you can make a mathematics case that 44.1k is enough sampling rate to cover "human hearing", but I'll add that (1) some people can hear beyond 22k, (2) you need room for a clean roll off in the last few k, and (3) a higher sampling rate helps to cope with unwanted distortion that can be introduced at many points, from your DAC system to quantization error. There is a strong case for sampling at 96k and few people would argue against it.
well, long as we're playing "whose link is better", here's an actual argument against it, made entirely on technical grounds: https://people.xiph.org/~xiphmont/demo/neil-young.html

in short, ultrasonic content (that's not audible) causes IMD to fold back into the audible spectrum. as in, "44.1K with steep low-pass is higher fidelity than 96K".
jochicago wrote: Wed Feb 13, 2019 10:10 am Nope. I'm showing both horizontal difference (more samples) to demonstrate increased sampling rate, and vertical movement (finer placement of the dots) to demonstrate that a higher-res system can record more precise detail. But you do make a ton of assumptions.
let's look at one at a time.

your example of "vertical resolution" is (again! how many times do i have to repeat this!) predicated on an assumption that the sampling resolution increases within the same dynamic range. in other words, you are assuming that, given point A and point B, 16-bit resolution gives you X steps between A and B, while 24-bit resolution gives you Y steps between A and B. that's not how it works. in both 16-bit and 24-bit resolution, the number of steps between points A and B is exactly the same. the added resolution merely enables you to cover C, D and E in addition to A and B (whereas in 16-bit, points C, D and E would've been below the noise floor and thus wouldn't have registered in the signal).

in other words, yes, you are misunderstanding how sampling resolution works. it does not increase resolution - it extends dynamic range. it leaves the old stuff as is, just adds a bit of new stuff, stuff you couldn't sample before. and it just so happens that this "new stuff" is added below -96dB. congratulations, you can now sample thermal noise in all its glory!

with regards to "horizontal" movement (i.e. more samples), this is again a misconception that you're having. a 44.1K playback will, in theory, reproduce everything up to and including 22K. in practice it's closer to 20K, but even young people don't hear that high (OK, some teenagers might - big whoop), let alone older ones. if your recording had ultrasonic content - yes, it would not be reproduced by the reconstructed wave, but that is not a problem. yes, the waveform does not look the same as the source, but it doesn't matter, because it sounds the same to our ears. all of the extra stuff that was cut out and is not present in the reconstructed waveform wasn't audible to begin with. that is how band-limiting works.
jochicago wrote: Wed Feb 13, 2019 10:10 am The funny thing is, that's what I think you are doing. You keep tripping yourself with things like the Nyquist/sampling theorem to answer the bit depth topic you don't understand with an answer that talks about sampling rate.
no dude, it is you who constantly flips back and forth between the two, and appear to be oblivious to the differences between them and what each of them stands for. time and time again, i explain to you things about bit depth, and your response somehow always includes references to sampling rate. how about you start addressing the points that are actually made, instead of falling back on drawing waveforms for the hundredth time?
jochicago wrote: Wed Feb 13, 2019 10:10 am This is like the 4th article I quote on bit depth. From Presonus:
A higher bit depth enables the system to accurately record and reproduce more subtle fluctuations in the waveform (see Fig. 1). The higher the bit depth, the more data will be captured to more accurately re-create the sound. If the bit depth is too low, information will be lost, and the reproduced sample will be degraded. For perspective, each sample recorded at 16-bit resolution can contain any one of 65,536 unique values (2^16). With 24-bit resolution, you get 16,777,216 unique values (2^24)—a huge difference!
https://www.presonus.com/learn/technica ... -Bit-Depth

That's from the makers of S1. Please proceed to explain to me how stupid Presonus engineers are because of Nyquist.
yes, i've read that article. which is why i never recommend anyone reading it, because it's inaccurate and full of marketing speak (and i don't think Presonus engineers wrote that - we all know who writes these things). i've also addressed this exact point a number of times already: all of the "millions of unique values" are below -96dB. from -96dB to 0, the exact same 65536 unique values are used to represent the signal, so as far as -96dB to 0 range is concerned, 16-bit and 24-bit are exactly the same.

the above-presented Monty Montgomery video is a far better and more technically accurate take on the subject.

Post

I want to thank people for their detailed responses. I've been doing a lot of reading and learnt a lot about resolution through this exercise.

I feel, at least as far as my participation goes, that we've reached a point of entrenchment and it sounds more like bickering and repeating the same points.

I'll continue to track the thread and read any links, but I'm comfortable in my own understanding, and personally don't have more to add.

Post

my advice would be to read up on how sound is represented in binary in the PCM format. understanding this is crucial to understanding what different bit depths will give you (and what they can't give you).
