How to decode 6-track APTX-100 (cinema DTS) with the correct channel levels
#31
from The Projectionist’s Guide to the DFP-3000:
Quote:all channels are full range 20kHz (even the subwoofer)

from the Sony DFP-D3000 SDDS decoder manual:
Quote:Setting the digital subwoofer low pass filter frequency to 100 to 200 Hz should be acceptable

from the DTS 6AD digital processor manual:
Quote:DTS derives the digital subwoofer by filtering out the surround signals from 80Hz and below.

from the Dolby CP65 Digital Cinema Processor manual:
Quote:Digital Subwoofer Channel
...
Pink noise is now present on the subwoofer channel only (100 Hz bandwidth)
so my educated guess is that the subwoofer channel bandwidth is 100 Hz as well
#32
Idk if this is a misunderstanding of the discussion, but I always thought it would be interesting to do the Cinema DTS decoding into a higher bit depth so that there's more headroom. My reasoning is ... maybe the original track was NOT clipped, but the loss of precision through the band splitting, encoding etc. might have resulted in clipping when everything is put back together.

So basically, the individual bands would be decoded and then added back together, and maybe that's the point where the clipping is introduced. Now if you decoded into, say, a floating-point buffer for the summing of the bands, then maybe this could be avoided.
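Something like this is what I have in mind - a rough Python sketch with made-up band data, not the actual plugin's internals:

```python
import numpy as np

def sum_bands_with_headroom(bands, out_dtype=np.int16):
    """Sum already-decoded subband signals in float so any overshoot
    survives the summation and is only handled at the final output stage."""
    mix = np.zeros_like(bands[0], dtype=np.float64)
    for band in bands:
        mix += band.astype(np.float64)      # no clipping possible here
    peak = np.max(np.abs(mix))
    limit = np.iinfo(out_dtype).max
    if peak > limit:
        print(f"sum would clip by {20 * np.log10(peak / limit):.2f} dB "
              "- attenuate or export as float instead")
    return np.clip(mix, np.iinfo(out_dtype).min, limit).astype(out_dtype)

# made-up band data: two bands whose sum exceeds 16-bit full scale
band_a = np.full(4, 20000, dtype=np.int16)
band_b = np.full(4, 20000, dtype=np.int16)
out = sum_bands_with_headroom([band_a, band_b])
```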

I actually asked the developer of the plugin if he could do that, but he wasn't willing to, and it's not open source either so I can't do it myself. It has bummed me out for a while ... maybe if someone else were to bug him about it in a way that doesn't seem coordinated, he might reconsider?
#33
I don't know enough about the APTX100 codec to say for sure, but if it is a 16-bit process by design then decoding to a higher bit depth may not be possible.
#34
My thoughts: the Cinema DTS codec APT-X100 is basically a refined ADPCM conversion, and even though it's good - actually very good, considering they did not use psychoacoustic principles at all, a great feature in my book - it can clip in rare cases; this bakes the clipping into the actual encoding, so it's not possible to "unclip" it unless you repair the track manually.

And even when it happens, it's usually limited to a few single-sample peaks here and there over a 1.5-2 hour 6-track movie - not that difficult to repair.
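If you want to hunt those peaks down quickly, something along these lines would do it - just a rough Python sketch, with the threshold and sample rate as assumptions:

```python
import numpy as np

def find_clipped_peaks(samples, sample_rate=44100, threshold=32767):
    """Return (sample index, time in seconds) for every sample sitting at
    16-bit full scale - the candidates for a manual repair pass."""
    hot = np.flatnonzero(np.abs(samples.astype(np.int32)) >= threshold)
    return [(int(i), i / sample_rate) for i in hot]

# made-up example: one full-scale sample in otherwise silent audio
channel = np.zeros(44100, dtype=np.int16)
channel[12345] = -32768
for idx, t in find_clipped_peaks(channel):
    print(f"sample {idx} at {t:.3f}s is at full scale")
```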

Fact is: a proper hardware decoder made by the producer should always sound better than a software version made by a third party - even if I must admit I am very grateful for the latter!!!

Side note: maybe the clipping is not due to the ADPCM conversion, or the software, but to the fact that DTS "pumped" their tracks in the early days to give them their "signature" sound (in particular the exploding bass!)

From sdurani (a reputable member of AVSforum):
Quote:Their CAE-4 encoder was tested by Warner Home Video on 5 titles and found to add a .6 dB level boost, just enough to not be perceived as a level difference but instead sound like a difference in sound quality (old audio sales trick). Weird part is that the level boost didn't show up with the internal test tones that were used for level matching to other encoders, only in the program material. Clever. Warners was able to catch this for the three 'Lethal Weapon' DVDs, so the included Dolby and DTS tracks ended up being encoded at the same level. But it was too late for 'Interview With a Vampire' and 'Twister' DVDs, both of which had already shipped with the DTS level boost intact (can still be measured against the Dolby track).
#35
Well my theory (or maybe I should call it a hunch to not overstate it) is that the data inside each band is still not clipped. Basically, the codec, afaik, separates the audio into different bands by frequency before applying further compression. Each band gets its own specific compression ratio and whatnot.

Now, the process would be: separate bands, then encode each band, and write all into CDTS.

So the reverse process would be: decode each band, sum them, and output 16 bit.

Once you have decoded each band and you're summing them up again, there would be nothing stopping you from just allowing for some headroom. Specifically, let's say the normal output is 16 bit - a so-called short in programming. You could just write the data into a normal 32-bit integer instead, and anything that would have clipped at 16 bit no longer does.

Whether that's actually how it works.... idk. Just a hunch.
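Just to illustrate the short vs. 32-bit point with made-up numbers (nothing here comes from the actual codec):

```python
import numpy as np

# two decoded band samples that individually fit in 16 bits (made-up values)
a = np.array([25000], dtype=np.int16)
b = np.array([12000], dtype=np.int16)

# summed as 16-bit "shorts" the true value (37000) doesn't fit and wraps around
print((a + b)[0])                                    # -28536: clipped/garbage

# summed in a 32-bit buffer the true peak survives and can be scaled afterwards
print((a.astype(np.int32) + b.astype(np.int32))[0])  # 37000
```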

Edit: On a side note, I was trying to find videos on YouTube explaining ADPCM and I gave up after trying 10 videos. Every. Single. One of them is someone mumbling in broken English into a really bad microphone with the volume cranked up to 200000. I'm not doing that to myself! Big Grin
#36
I don't know if a different decoding approach would make a difference; I took a look at ADPCM and found several newer encoding methods, but when they rely on legacy compatibility they don't involve a new decoding method as well. So *maybe* you could improve the encoding, but not the decoding, which *should* be simpler than the whole encoding process with all its predictions - hence why it could sometimes clip... I guess?!? Wink
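To make the prediction idea a bit more concrete, here's a toy predict-and-quantize loop in Python - nothing to do with the real APT-X100 internals, just the general DPCM principle (real ADPCM also adapts the step size), and it shows why the decoder is the simpler half:

```python
import numpy as np

def adpcm_encode(samples, step=256):
    """Toy DPCM encoder: predict each sample from the previous reconstruction
    and transmit only the coarsely quantized prediction error.
    (Real ADPCM also adapts the step size - omitted here for clarity.)"""
    codes, prev = [], 0
    for s in samples:
        err = int(s) - prev                  # prediction error
        code = int(round(err / step))        # coarse quantization
        codes.append(code)
        prev += code * step                  # track what the decoder will rebuild
    return codes

def adpcm_decode(codes, step=256):
    """Decoding just feeds the dequantized residuals back through the same
    predictor - no error to minimise, hence the simpler half."""
    out, prev = [], 0
    for code in codes:
        prev += code * step
        out.append(prev)
    return np.array(out, dtype=np.int32)

# toy round-trip on a slow ramp: reconstruction stays within half a step of the input
x = np.arange(0, 8000, 500, dtype=np.int16)
print(adpcm_decode(adpcm_encode(x)))
```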
#37
So I got a hold of both the DS3 test disc (post-1999) and an Empirical Test Disc (1997 version) and ran both through Foobar2000 with no LFE boost. For the DS3 disc, I checked the 1 kHz test tone that DTS suggests using to set the output levels of the DTS decoder correctly. I dropped that reel into Audition and found that the 1 kHz tone for the LFE was exactly 6 dB too low, and boosting it brings the tone to the matching level of the other channels. Hooray, proof!
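If anyone wants to check their own decode the same way, the comparison boils down to something like this (a rough Python sketch; the file name and channel order are placeholders):

```python
import numpy as np
import soundfile as sf   # assuming the decoded reel was exported as a multichannel WAV

def rms_dbfs(x):
    """RMS level of a float signal (full scale = 1.0) in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

# placeholder file name; channel order is an assumption, adjust to your layout
audio, sr = sf.read("ds3_1khz_test_reel.wav")   # shape: (samples, channels)
centre, lfe = audio[:, 2], audio[:, 3]

diff = rms_dbfs(centre) - rms_dbfs(lfe)
print(f"LFE 1 kHz tone measures {diff:.2f} dB below the centre channel")
```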

The 1997 disc has a similar tone, but it doesn't have one on the LFE track for whatever reason - not that I can find, at least. So I tried to use the RTA within Audition to match the LFE level to the main channels using pink noise. In order to get it to match per DTS specs, I had to boost the LFE +9 dB! That's a lot more than the 3 dB we've all been using as a standard for some time. It also hurts my brain when I try to figure out why that would be the case, because all the pre-1999 LFE tracks for DTS were recorded/encoded at a higher level than anything after 1999, so one would think that, yes, 9 dB would be too much. I decided to test this out on a film I'm familiar with to see what the deal was and grabbed a handful of tracks from Jurassic Park. I had the following to work with:

-Original 1993 .aud files (no boost, decoded in Foobar)
-Rip from DTS hardware
-DTS LaserDisc
-AC3 LaserDisc

What I found was that in order to get them all to relatively match, I had to do the following:

-The foobar rip needed a 9dB boost
-The hardware rip needed a 6dB boost (which was recommended by my expert source)
-The DTS LD needed to be attenuated about 2 to 3 dB (consistent with reports over time)
-The AC3 LD needed no changes

Ok, interesting. Fascinating. Bizarre. So I grabbed the first pre-1999 disc that wasn't Jurassic Park (Apollo 13), and while I didn't have other tracks to compare it with in an accessible form, I did find that the 3 dB boost works fine and that boosting it +9 dB seems to introduce a bit too much clipping. But again, this could be by design.

So at this point, I'm still not convinced we're using the right levels for pre-1999 titles, and it might warrant further investigation. DTS did state that all pre-1999 tracks should still have the 10 dB in-band gain for the LFE, and that they were recorded in such a way that allowed this to happen when adjusting the output levels to 88 dB...but maybe some things slipped through the cracks?

Most of the pre-1999 stuff I've done myself with a 3 dB boost sounds great, but maybe there should be more? I might try to use Titanic as an example next, as I have multiple versions of that track available too.
#38
So for Titanic, I used the following:

-Foobar Decode (+3dB LFE)
-DTS LaserDisc

To match the overall main channels, I had to lower the DTS LD by 0.1 dB. Once I did this, the DTS LD LFE appeared 3 dB LOWER than the Foobar rip with the boost - at least for the sample passage I used.

In other words, I almost feel Jurassic Park is an exception. I can't imagine DTS and Universal screwing up the first release in the format that much... so much scrutiny was on it. I also don't think +9 dB makes any logical sense on all these pre-1999 films. But perhaps they did, or there are other factors at play that are way, way over my head.
#39
It's been a while since I tried to wrap my brain around all of this stuff, but I do remember that how the subwoofer level is measured makes a difference: the 10 dB in-band gain is observed when using an RTA (i.e. the output from the subwoofer in that specific frequency range shall be 10 dB greater than the same frequency range on the main channels). If you use a full-range SPL meter instead, the gain will read approx 6 dB or so, since the subwoofer's band only covers a fraction of the full-range spectrum; SPL meters also aren't always accurate in that narrow frequency range.

I don't think clipping was an issue in the theatre (unless the track was encoded with clipping already), as the gain was applied in the B-chain. I think some mixes even lowered the level of all the main channels and then requested an additional rise of the subwoofer fader during playback in order to get even more out of the subwoofer. It's a problem for us because once the DTS APT-X is decoded it's in the 16-bit/44.1 kHz domain, and applying the gain at that stage can/will introduce clipping, since domestic formats expect an LFE channel with the 10 dB in-band gain. The alternative is to adjust the LFE level (or the subwoofer) depending on the title, but this isn't always feasible with modern AVRs or home speaker systems.
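To show what I mean about the gain clipping once you're back in the 16/44.1 domain, here's a rough sketch - applying the boost in float first at least tells you how much would clip before you commit to 16-bit (the gain and signal are made up for illustration):

```python
import numpy as np

def boost_lfe(lfe_int16, gain_db=10.0):
    """Apply the LFE gain in float first and report how much of the result
    would clip if written straight back to 16-bit."""
    boosted = lfe_int16.astype(np.float64) * (10.0 ** (gain_db / 20.0))
    clipped = np.count_nonzero(np.abs(boosted) > 32767)
    if clipped:
        print(f"{clipped} samples ({100 * clipped / len(boosted):.3f}%) would clip at 16-bit")
    return np.clip(boosted, -32768, 32767).astype(np.int16)

# made-up signal: a 40 Hz sine at half of full scale, which clips once boosted 10 dB
t = np.arange(44100) / 44100.0
lfe = (0.5 * 32767 * np.sin(2 * np.pi * 40 * t)).astype(np.int16)
safe_16bit = boost_lfe(lfe)
```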
#40
What about the surrounds?
I haven't done any in-depth analysis, and I don't have any DTS hardware to work with; but there's a handful of titles where the BD and CDTS tracks are very similar, yet after applying the -3 dB gain the BD surrounds are usually louder.

I should also add that I've listened to quite a few tracks converted using the methods in this thread, and in practice the levels always seem fine in comparison to the LCRs.

