Posts: 1,111
Threads: 26
Joined: 2015 Jan
Thanks: 687
Given 305 thank(s) in 206 post(s)
Hello Fanres
My process for projects has been to transcode a Blu-ray (or whatever the original video file is) to a lossless codec for editing.
I use an AviSynth script and pipe it through VirtualDub2, normally converting to RGB with the MagicYUV lossless codec.
I've converted from HDR to SDR using AviSynth with z_ConvertFormat and DGHable/DGReinhard in the past, and recently experimented with converting straight from 10-bit Rec2020 to RGB and then using a LUT in Premiere to monitor/convert, thinking it would give me greater control.
This has always been in 8-bit SDR land, and I'm not really set up for 10-bit 4K yet, but I'd like to master in it for the future.
I was wondering: how are people transcoding/converting UHDs with HDR for editing?
Both staying in its native format and down-converting to SDR.
Thanks for your time
Posts: 5,031
Threads: 175
Joined: 2015 Jan
Thanks: 3180
Given 2944 thank(s) in 1285 post(s)
Country:
I myself have been trying to get away from using AviSynth/VDub for projects. It has many useful tools for fixing terrible 1080i/p masters, but for 4K and HDR I don't like to use it. That has partly to do with my current affinity for ProRes. It's just much easier to scroll, crop, cut and color in AP or Resolve, and NLEs handle 4K ProRes more efficiently than the AVI variants.
My current method of conversion is to use FFmpeg to convert UHD to HD. I found bits and pieces of the correct command all over the place, but Chew helped me pull together the best version. Here is an example:
Code: "C:\ffmpeg64.exe" -i "F:\Input.mkv" -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable,zscale=t=bt709:m=bt709:r=tv,format=yuv422p10le,scale=1920:1080" -sws_flags sinc -f mov -c:v prores -pix_fmt "yuv422p10le" -profile:v 3 -an -y "F:\Output.mov"
This converts a 4K-2020-HDR MKV to 1080p-709-SDR 4:2:2 10-bit ProRes so I can edit or color it. Obviously, different settings give different results, and you can substitute what parts you need in or out. The most logical alternative is converting to a simple 4K-2020-SDR 10-bit ProRes, if needed; you just need to leave in the HDR-to-SDR tonemapping. This command uses the Hable method of tonemapping, which is the best overall but can be dark for some people. There are many alternative tonemapping choices, like Reinhard:
http://ffmpeg.org/ffmpeg-all.html#tonemap-1
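For intuition, here is a rough Python sketch of the Hable ("filmic") curve that ffmpeg's tonemap=hable is based on. The constants are the commonly published ones; ffmpeg's internal scaling and peak handling may differ slightly, so treat this as an illustration, not ffmpeg's exact math:

```python
# Sketch of the Hable (Uncharted 2 "filmic") tonemap operator.
# Compresses a linear-light HDR value into [0, 1] with a smooth
# shoulder instead of a hard clip.
def hable_partial(x: float) -> float:
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def hable(x: float, white: float = 11.2) -> float:
    """Tonemap linear value x, normalized so `white` maps to 1.0."""
    return hable_partial(x) / hable_partial(white)

# Bright highlights roll off smoothly rather than clipping:
for v in (0.1, 1.0, 4.0, 11.2):
    print(f"{v:5.1f} -> {hable(v):.3f}")
```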
A few notes. First, another advantage of this method over AviSynth is that I find the conversion faster, at least on my system. Now, by faster I don't mean fast, but much faster than the other methods, especially given all the work it has to do.
Second, FFmpeg's ProRes is considered "non-standard", slightly off-spec from Apple's. This has never been a problem for me, as every conversion has played fine in all my NLEs, but it's worth noting nonetheless. Also, ProRes is a lossy compressed format by definition. I know part of the inquiry was about lossless, but in comparing ProRes to any lossless AVI variations, I have yet to see visible detail lost, and I've run a LOT of tests. As long as you don't use one of the low-quality settings, you shouldn't lose anything visually. If it's good enough for the pros, it's good enough for me.
Lastly, for AVIs you always have to use RGB, as AP produces incorrect colors on render if the AVI is in a YUV-ish format, and the RGB colorspace takes a lot more storage. ProRes is (mostly) YUV, so you get relatively smaller files that AP renders correctly.
Hope this helps.
Addendum (notes for me in the future):
SDR 10-bit (Prores 4:2:2/4:4:4:4) back to HDR10 10-bit
Code: ffmpeg.exe -i "F:\input.mov" -pix_fmt yuv420p10le -c:v libx265 -preset slow -crf 18 -x265-params "keyint=60:bframes=3:vbv-bufsize=75000:vbv-maxrate=75000:hdr10=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,500):max-cll=1000,400" -c:a copy "F:\out.mp4"
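A note on decoding that master-display string: the x,y chromaticity coordinates are stored multiplied by 50000 and the luminance values by 10000 (per SMPTE ST 2086, which x265 follows). A quick Python sketch that rebuilds the exact string above from P3-D65 primaries and a 1000-nit / 0.05-nit mastering display:

```python
# Build an x265 master-display string from chromaticities (a sketch).
# SMPTE ST 2086 / x265 convention: x,y scaled by 50000, luminance
# (cd/m^2) scaled by 10000.
def master_display(g, b, r, wp, max_lum, min_lum):
    s = lambda xy: (round(xy[0] * 50000), round(xy[1] * 50000))
    gx, gy = s(g); bx, by = s(b); rx, ry = s(r); wx, wy = s(wp)
    return (f"G({gx},{gy})B({bx},{by})R({rx},{ry})"
            f"WP({wx},{wy})"
            f"L({round(max_lum * 10000)},{round(min_lum * 10000)})")

# P3-D65 primaries, D65 white point, 1000-nit max / 0.05-nit min:
print(master_display(g=(0.265, 0.690), b=(0.150, 0.060),
                     r=(0.680, 0.320), wp=(0.3127, 0.3290),
                     max_lum=1000, min_lum=0.05))
```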
DV conversion to SDR with jellyfin's ffmpeg
Code: ffmpeg -init_hw_device opencl -i input.mkv -vf "hwupload,tonemap_opencl=tonemap=bt2390:desat=0:peak=100:format=nv12,hwdownload,format=nv12" -c:v libx264 -c:a copy -crf 18 "output.mkv"
Posts: 1,111
Threads: 26
Joined: 2015 Jan
Thanks: 687
Given 305 thank(s) in 206 post(s)
2021-01-20, 11:41 PM
(This post was last modified: 2021-01-20, 11:45 PM by CSchmidlapp.)
Thanks once again PDB
So would a 4K-2020-SDR 10-bit ProRes file contain all the information in the original file?
By that I mean the whole contrast ratio, color space and bit depth.
If so, by my understanding of HDR, it wouldn't need to be tonemapped.
Which is where I'm struggling to get my head around things.
I won't lie, I'm still a little confused about what HDR actually is, and I probably sound really stupid.
My understanding is that it's a marketing name for the expanded contrast ratio, color space and bit depth compared to SDR HD.
The metadata in HDR is there to correct the image to the given display's specs based on its nit values etc.
So with this metadata removed, it now plays back at whatever peak nits it's mastered to (e.g. 1000 nits), so if I'm watching on a 350-nit screen it clips the light above that.
There was a thread here where peeps tried to explain it to me, but I guess I still didn't get it!
I remember being told my confusion came from it no longer being measured in gamma?
deleted user
Unregistered
Thanks:
Given thank(s) in post(s)
I don't know about Premiere, but newer versions of After Effects can import HDR (not from HEVC, obviously!). If you enable color management in the project settings, it will automatically convert the HDR to whatever colorspace you choose to work in, for example sRGB or DCI-P3, and then you can do all your grading in that color space. For example, you could convert the HDR to an RGB AVI using AviSynth (without the tonemapping, just ConvertToRGB64(matrix="Rec2020")), assign HDR PQ as the colorspace for the file in AE, set your timeline to whatever you want to edit in, and drop it in. Note: do it in 32-bit floating point mode, because the material naturally has superbright values which you might not be able to recover in 8- or 16-bit mode.
Posts: 1,111
Threads: 26
Joined: 2015 Jan
Thanks: 687
Given 305 thank(s) in 206 post(s)
Thanks Tom
I did a transcode to RGB64 before, thinking exactly what you've just written.
From what I remember, though, I got some errors on matrix="Rec2020" because it was a YUV parameter.
I'll have a play around.
How far from reality is my summary of HDR in the post above?
deleted user
Unregistered
Thanks:
Given thank(s) in post(s)
So basically, there are three aspects to each color space when it comes to video: the primaries/white point (defining how deep/saturated your colors can go), the transfer function, and the YUV transform coefficients. Not sure if I used 100% the right words here, but the concept is roughly correct.
1. Primaries/white point
This basically defines how saturated the R, G and B components of the colorspace are. In the case of a display, for example, that is defined by the chemicals used as dyes (or whatever) in the pixels; your image cannot get any more saturated than the individual pixel allows it to be. I don't 100% understand the math behind all this myself yet, but in a simplified way you could say it has to do with the spectral output of the pixel: where its peak is and how much "contamination" it gets from other colors. So let's say your pixel is green, but it also has some output in the blue/red areas of the spectrum. That puts it a step closer to white, meaning the saturation is decreased.
All of this is expressed, I believe, via the XYZ color space. Each primary (red, green or blue) consists of a pair of values related to the XYZ space called x and y (small x and small y). This is how each color space is defined in terms of its gamut. With these parameters you can set up your own color space in Photoshop via the Custom RGB... menu.
This image you've probably seen many times:
You can see the x-axis is called x and the y-axis, fittingly, y. I don't know exactly what x and y are, but basically, the bigger the triangle between the individual primaries (which are defined by coordinates in this diagram), the more colors you can reproduce and the higher the saturation you can achieve. Theoretically, though, you could have a gamut with a very saturated green but an undersaturated red, etc. The grey area is the gamut visible to humans, I believe.
Edit: Forgot to mention what this has to do with HDR. Basically, Rec2020 defines three primaries on this diagram (x,y coordinates) that encompass a rather big space, allowing for highly saturated colors and a wide range of colors. When played back on a TV, it will of course get mapped to/limited by whatever the TV's pixels can actually display.
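The "bigger triangle" point can be made concrete with the published xy primaries (coordinates from BT.709 and BT.2020; the shoelace formula gives each triangle's area in the xy diagram — a rough proxy only, since the diagram is not perceptually uniform):

```python
# Compare gamut triangle areas in the CIE xy chromaticity diagram.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Shoelace formula for the area of a triangle from its vertices.
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # R, G, B

a709, a2020 = triangle_area(*REC709), triangle_area(*REC2020)
print(f"Rec709 area:  {a709:.4f}")
print(f"Rec2020 area: {a2020:.4f}  ({a2020 / a709:.2f}x larger)")
```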
2. Transfer function
Raw light is of course always linear, but saving linear values in a file would need a lot of bit depth. So what we do is apply a transfer function. For SDR, that is usually a slightly modified version of a gamma curve; sRGB, for example, has an approximate gamma of 2.2, I think. Basically, instead of saving linear values, we apply this transfer function (for example gamma) before saving them. That way we can cover the dynamic range with a lower bit depth.
For HDR, you basically have three options afaik: something normal like gamma (that would really just be SDR, but with the wide gamut), HLG, or PQ. PQ is pretty much the standard, used on all 4K HDR Blu-rays and most streaming.
The PQ curve basically assigns a fixed nit brightness value to each code value (actual RGB value). The PQ transfer function is more complicated than a gamma curve, but it's the same concept: a curve that compresses the dynamic range.
Whenever you import something in a color managed application like Photoshop or After Effects and that software knows the color space (through an ICC profile), then it will basically reverse the transfer function again and thus restore the linear data. So you pretty much don't have to worry about any of this yourself for the most part. Whether After Effects unfolds a gamma curve or a PQ curve is all the same to you, except that after unfolding the PQ curve you will have superbright values (above 1.0). So you can then use something like the Exposure effect to bring the highlights back down.
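To make the "fixed nit value per code value" idea concrete, here is a sketch of the PQ EOTF using the constants from SMPTE ST 2084 (this is the standard formula; any rounding in real implementations aside):

```python
# SMPTE ST 2084 (PQ) EOTF sketch: normalized signal E in [0, 1]
# -> absolute luminance in cd/m^2 (nits). Constants from the spec.
m1 = 2610 / 16384          # ~0.1593
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(E: float) -> float:
    Ep = E ** (1 / m2)
    num = max(Ep - c1, 0.0)
    den = c2 - c3 * Ep
    return 10000.0 * (num / den) ** (1 / m1)

# Code value 1.0 is pinned to the 10,000-nit theoretical ceiling;
# most of the curve's precision sits in the darker range.
for e in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{e:.2f} -> {pq_eotf(e):9.2f} nits")
```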
3. YUV transform coefficients
All of this color space stuff basically happens in a form of RGB. YUV is a special thing we use for video encoding (but also JPEGs and such). The YUV transform coefficients define exactly how the RGB gets transformed to YUV. This is what you're setting with matrix="Rec2020", matrix="Rec709" or matrix="Rec601". Afaik, the coefficients for Rec2020 actually differ slightly from Rec709's, which is why flagging the wrong matrix shifts the colors.
In other words, this ONLY matters when you are going from RGB to YUV and back. When you, for example, convert one kind of YUV (like YV24) to another (like YV12), it doesn't matter, because the material is already in YUV and staying there. Similarly, it doesn't matter when going from one kind of RGB to another.
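For reference, the luma weights the two specs define (BT.709: Kr=0.2126, Kb=0.0722; BT.2020: Kr=0.2627, Kb=0.0593) can be compared directly; a quick sketch with an arbitrary test color:

```python
# Compute luma (Y') from the same R'G'B' triplet under BT.709 and
# BT.2020 coefficients to show the matrices are not interchangeable.
def luma(r, g, b, kr, kb):
    kg = 1.0 - kr - kb  # green weight is whatever is left over
    return kr * r + kg * g + kb * b

rgb = (0.8, 0.4, 0.2)                      # arbitrary test color
y709  = luma(*rgb, kr=0.2126, kb=0.0722)   # BT.709
y2020 = luma(*rgb, kr=0.2627, kb=0.0593)   # BT.2020 (non-constant lum.)
print(f"Y'709  = {y709:.4f}")
print(f"Y'2020 = {y2020:.4f}")
```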
About the HDR metadata: It's as you say, it helps the TV know what to expect from the material so it can correctly tonemap it to its own range. It doesn't really change the HDR material in and of itself. So the HDR metadata for example wouldn't matter for a fanres. However if you decide to make the final result of your fanres HDR too, then you'd need to run an analysis pass on your rendered HDR content to analyze for MaxCLL (maximum content light level) and MaxFALL (maximum frame average light level), so that it can be played back in the best way possible.
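That analysis pass boils down to two reductions over per-pixel luminance. A toy sketch (real tools operate on decoded frames converted to nits; the data here is made up for illustration):

```python
# MaxCLL  = brightest single pixel across all frames (in nits).
# MaxFALL = highest per-frame average luminance (in nits).
def max_cll_fall(frames):
    max_cll = max(max(frame) for frame in frames)
    max_fall = max(sum(frame) / len(frame) for frame in frames)
    return max_cll, max_fall

# Hypothetical per-pixel nit values for three tiny "frames":
frames = [
    [120, 80, 95, 105],
    [900, 60, 70, 50],     # one specular highlight
    [300, 310, 290, 305],  # bright overall
]
cll, fall = max_cll_fall(frames)
print(f"MaxCLL = {cll} nits, MaxFALL = {fall:.2f} nits")
```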
Dolby Vision is a whole other beast from what I've seen, and has way more aspects to it apparently, but people are already working on deciphering it.
Hope that helps. Let me know if anything is unclear.
Posts: 20
Threads: 0
Joined: 2015 Oct
Thanks: 4
Given 16 thank(s) in 9 post(s)
Country:
In my case I was able to get an HDR10 HEVC file into DaVinci Resolve Studio 17 for editing simply by remuxing it to MP4. Resolve uses hardware acceleration for HEVC, so the performance was acceptable to me.
Once I was done editing, I exported as DNxHR HQX 10-bit and encoded back to HEVC using x265 while specifying the original HDR10 metadata. The output looks the same as the source.
Here is a screenshot of the project settings; the highlighted setting is a LUT that converts the video preview to BT709 so it displays correctly on a regular monitor.
deleted user
Unregistered
Thanks:
Given thank(s) in post(s)
@ babouin That's great to know, thanks for sharing!
Posts: 5,031
Threads: 175
Joined: 2015 Jan
Thanks: 3180
Given 2944 thank(s) in 1285 post(s)
Country:
(2021-01-20, 11:41 PM)CSchmidlapp Wrote: Thanks once again PDB
So would a 4K-2020-SDR 10-bit ProRes file contain all the information in the original file?
By that I mean the whole contrast ratio, color space and bit depth.
If so, by my understanding of HDR, it wouldn't need to be tonemapped.
Which is where I'm struggling to get my head around things.
I won't lie, I'm still a little confused about what HDR actually is, and I probably sound really stupid.
My understanding is that it's a marketing name for the expanded contrast ratio, color space and bit depth compared to SDR HD.
The metadata in HDR is there to correct the image to the given display's specs based on its nit values etc.
So with this metadata removed, it now plays back at whatever peak nits it's mastered to (e.g. 1000 nits), so if I'm watching on a 350-nit screen it clips the light above that.
There was a thread here where peeps tried to explain it to me, but I guess I still didn't get it!
I remember being told my confusion came from it no longer being measured in gamma?
I had written a long, long boring technical thing but simplified it down to this:
Video HDR is not a color system per se; it's a luminosity system affecting things like peak brightness, grayscale, contrast, gamma, etc. Basically, it defines the background "luma" range and not the colors. It does tangentially affect color, since changing those values affects the colors, but HDR doesn't inherently define the color values themselves.
In truth, the color has more to do with the wide color gamut (WCG) part of DCI-P3 or Rec2020/2100/etc. and RGB/YUV.
SDR is a grayscale from 0 to 100, with 0 being absolute black and 100 being peak white for a CRT. HDR goes from 0 to a theoretical max of 10,000 nits, with a different system than gamma. Even though 10,000 is the max, colorists will often grade to whatever works for them: 200, 400, 1000, 2000, 4000, etc.
A video file has to have HDR in the spec to capture the full range of it. So no ProRes, except the newer ProRes RAW, will be able to do HDR. You can kind of fake it by not doing the HDR-to-SDR conversion and expanding it back out with a LUT or an SDR conform in AP, but you will lose something.
deleted user
Unregistered
Thanks:
Given thank(s) in post(s)
(2021-01-21, 06:44 PM)PDB Wrote: Simplified:
Video HDR is not a color system per se; it's a luminosity system affecting things like peak brightness, grayscale, contrast, gamma, etc. Basically, it defines the background "luma" range and not the colors. It does tangentially affect color, since changing those values affects the colors, but HDR doesn't inherently define the color values themselves.
In truth, the color has more to do with the wide color gamut (WCG) part of DCI-P3 or Rec2020/2100/etc. and RGB/YUV.
While I agree, the way the industry uses the term HDR typically incorporates Rec2020 automatically alongside the higher luminosity range. I've not seen an HDR Blu-ray in Rec709 yet, nor an SDR Blu-ray in Rec2020, though that doesn't mean they couldn't exist, I suppose.