(2025-08-12, 03:53 PM)borisanddoris Wrote: I've been feeding some data into ChatGPT about this topic and this is how it explained it to me. I don't trust machines, but this does seem to make some logical sense with my rudimentary understanding of audio. I really would like to get schorman to chime in on this topic. He's done extensive work in this realm.
"Decoding to **32-bit float** preserves any peaks that go above 0 dBFS during the decode process, preventing hard clipping that would occur if you went straight to fixed-point PCM.
Attenuating afterward lets you bring those peaks back into safe range without losing detail.
Because float has huge headroom and precision, you can do level adjustments without introducing distortion.
When you’re done, exporting to **24-bit PCM** gives you more than enough dynamic range for delivery while keeping file sizes reasonable.
This workflow ensures your final master matches the original’s dynamics and avoids irreversible clipping."
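The workflow the quote describes can be sketched as a toy model (my own illustration, not a real decoder; samples are normalized so 1.0 equals 0 dBFS):

```python
# Toy model of the quoted workflow: decode to float, attenuate, then
# quantize to fixed point. Illustrative only; real decoders are more involved.

def to_int16(x):
    """Direct fixed-point conversion: anything over 0 dBFS hard-clips."""
    return [max(-32768, min(32767, round(s * 32767))) for s in x]

def attenuate_db(x, db):
    """Lower the level of a float signal by `db` decibels."""
    gain = 10 ** (-db / 20)
    return [s * gain for s in x]

# A decoded float signal with one excursion above 0 dBFS.
decoded = [0.5, 0.9, 1.2, 0.8]   # the 1.2 sample (~+1.6 dBFS) survives in float

clipped = to_int16(decoded)                      # straight to PCM: 1.2 pins at 32767
safe    = to_int16(attenuate_db(decoded, 2.0))   # attenuate first, then quantize

print(clipped)   # the over-full-scale sample is flattened to 32767
print(safe)      # everything fits; the peak's shape is preserved
```

The point being: the float-then-attenuate route only helps if the decoder actually produces values above 1.0 in the first place, which is exactly the question at hand below.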
So I took a bit of time to look into some of the tech info more closely.
In theory, yes, Floating Point codecs are designed to preserve information that goes over the limit; however, I get the sense that this isn't really the case here, unfortunately. If what TomArrow theorized holds water, the problem would then lie with the Foobar2000 plugin baking the decode within the limited 16-bit range.
Apart from this, to my understanding, since the difference between 16, 24 and 32-bit Fixed Point only increases the dynamic range below the 0 dBFS point, the mixer would have had to somehow submit a 32-bit Floating Point track for compression.
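The bit about extra fixed-point bits only extending range downward can be checked with the common ~6.02 dB-per-bit rule of thumb (my arithmetic, not anything from the codec docs): the ceiling is always 0 dBFS, and more bits just push the noise floor lower.

```python
import math

def fixed_point_range_db(bits):
    """Approximate dynamic range of fixed-point PCM: 20*log10(2^bits),
    i.e. ~6.02 dB per bit. The ceiling stays at 0 dBFS regardless of depth;
    extra bits only lower the floor."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 32):
    print(bits, round(fixed_point_range_db(bits), 1))
# 16 -> ~96.3 dB, 24 -> ~144.5 dB, 32 -> ~192.7 dB, all below the same 0 dBFS roof
```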
From what I could find online regarding the format, whilst floating point audio was technically standardized in the mid-80s, the capability of actually storing it would've been extremely limited. Apart from that, the APT-X1000 compression format originally supported only 16-bit storage, with 24-bit being a later advent, so the mix would have had to be crimped right from the get-go.
So even if TomArrow was right and distortion was introduced by the codec, then unless I am completely stupid and misunderstanding how the format works, the original mix would've inherently been fixed point PCM, and for the clipping to appear after decompression, the original mix would've had to have clipped as well.
Unless APT-X1000 altered the input mix's attenuation as part of the compression step, I don't believe it's possible to yield any additional information from a Float decode, since 0 dBFS would still have been the original roof.
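A quick sketch of that core point (a toy illustration of mine, not the codec itself): if the signal was already hard-clipped before or during encoding, decoding to float merely reproduces the flat-topped plateau at full scale, with nothing above 0 dBFS left to recover.

```python
# Once a peak has been flattened at 0 dBFS in fixed point, a float decode
# of the stored samples cannot restore it -- the information is gone.

def hard_clip(x, ceiling=1.0):
    """Simulate fixed-point storage: values beyond full scale are pinned."""
    return [max(-ceiling, min(ceiling, s)) for s in x]

original = [0.7, 1.1, 1.3, 1.1, 0.7]   # hypothetical mix peaking over 0 dBFS
stored   = hard_clip(original)          # fixed-point master: peak flattened
as_float = [float(s) for s in stored]   # a "float decode" of the stored PCM

print(as_float)   # [0.7, 1.0, 1.0, 1.0, 0.7] -- the 1.3 peak never comes back
```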
In the Tenet 64 Float example, this is one moment where the center channel clips. The first dry render was the mix left as-is; the second one I lowered by 0.01 dB.
![[Image: image.png]](https://i.postimg.cc/3R28DppP/image.png)
![[Image: image.png]](https://i.postimg.cc/MK66vnGF/image.png)
Given that one hits the red and the other doesn't at that small an attenuation, combined with the graphs, I'm led to believe it's not recovering any previously lost information. There's probably a more scientific way of going about this, but I'd be astounded if there were a difference.
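For what it's worth, the 0.01 dB experiment can be put in code (my own sketch): count the samples at or above full scale before and after the tiny attenuation. If the decode had genuinely preserved peaks above 0 dBFS, a 0.01 dB drop wouldn't be anywhere near enough to pull them back under; the fact that it is suggests the peaks were already pinned exactly at full scale.

```python
# Counting full-scale samples before and after a 0.01 dB attenuation.

def over_count(x, ceiling=1.0):
    """Number of samples at or above digital full scale."""
    return sum(1 for s in x if abs(s) >= ceiling)

def attenuate_db(x, db):
    gain = 10 ** (-db / 20)
    return [s * gain for s in x]

decode = [0.8, 1.0, 1.0, 0.9]   # plateau pinned exactly at 0 dBFS

print(over_count(decode))                        # hits the red
print(over_count(attenuate_db(decode, 0.01)))    # 0.01 dB later: nothing clips

# A genuinely preserved over-full-scale peak would still clip after -0.01 dB:
print(over_count(attenuate_db([1.2], 0.01)))     # still over full scale
```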
The only real use for Float codecs in this situation would be for post-decode attenuation, since in a theoretical example with Mulholland Drive, the +9 dB on the LFE and +3 dB on the LRC channels would preserve the peaked data over that 0 dBFS limit. That said, apart from the inherent impracticality of a 32-bit Float delivery, the dynamic range of the mix would only be 18 bits (17.5 bits if you discarded the additional +3 dB that Lynch instructed) once you've offset the volume.
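The rough arithmetic behind trading level offsets for bits (my own back-of-envelope, using the same ~6.02 dB-per-bit rule of thumb) is that every ~6 dB of offset eats about one bit of usable range:

```python
import math

DB_PER_BIT = 20 * math.log10(2)   # ~6.02 dB of dynamic range per bit

def db_to_bits(db):
    """Rule of thumb: how many bits of range a given level offset costs."""
    return db / DB_PER_BIT

print(round(db_to_bits(9), 2))   # the +9 dB LFE offset costs ~1.5 bits
print(round(db_to_bits(3), 2))   # the +3 dB LRC offset costs ~0.5 bits
```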
This opens the door for the aforementioned subtractive attenuation with a FLAC delivery as the efficient solution (since you can encode bit depths like 17 and 18 in a padded 24-bit container to save space), or a DTS-HD MA / Dolby TrueHD delivery as the easy playback solution (with a more bloated full 24-bit encode, but paired with dialnorm metadata so that the mix can play back at reference volume, albeit with the uncertainty of how the peaks will be compressed on-the-fly).
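The padded-container trick works because FLAC's encoder detects trailing zero bits common to a subframe ("wasted bits") and removes them, so an 18-bit signal stored in 24-bit words costs close to true 18-bit size. A hypothetical sketch of the quantization step (the 18/24 split is from the example above; the function name is mine):

```python
# Quantize signed 24-bit samples to an 18-bit grid by zeroing the low 6 bits.
# FLAC's "wasted bits" detection then codes the subframe near 18-bit cost.

PAD = 6  # 24-bit container, 18 significant bits

def quantize_18_in_24(samples_24bit):
    """Arithmetic shift down and back up: low PAD bits become zero."""
    return [(s >> PAD) << PAD for s in samples_24bit]

src = [8388607, -8388608, 123456, -64]
padded = quantize_18_in_24(src)
print(padded)   # every value is now a multiple of 64 (2**PAD)
```

Note that for negative samples this floors toward negative infinity; a real delivery would likely dither and round instead, but the padding principle is the same.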