Author Topic: MLV App 1.11 - All in one MLV Video Post Processing App [Windows, Mac and Linux]  (Read 534985 times)

Danne

  • Developer
  • Hero Member
  • *****
  • Posts: 7018
HLG seems to be an "HDR" format standard recently developed by the BBC (together with NHK). It looks like a gamma-curve approach built around HDR TV technology. I'm not sure whether the TV reads metadata in the file and applies the gamma itself, whether the footage needs to be encoded in HLG, or maybe both? Ben Turley has a few formats around HLG. Maybe it's enough to apply a 1D LUT here?


Here's a pretty good explanation:

bouncyball

  • Contributor
  • Hero Member
  • *****
  • Posts: 810
Seems all my darkframes generated by mlv_dump have the wrong frame count in metadata. My new MLV App-generated darkframes don't exhibit the same issue.
So... have I missed something while using mlv_dump?
Correct, mlv_dump-generated darkframe MLVs have the wrong (original) frame count in the header. Unfortunately I did not correct this in mlv_dump; I only put the correct code into MLV App.
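For context, the averaging itself is simple. A hypothetical sketch (not the actual mlv_dump/MLV App code) of collapsing N dark exposures into one darkframe, which is exactly why the output header must report a frame count of 1 rather than the original count:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch (not the actual mlv_dump code): a darkframe is
 * the per-pixel average of n_frames dark exposures, written out as a
 * single frame -- hence videoFrameCount must be 1 in the output header. */
static void average_frames(const uint16_t *frames, size_t n_frames,
                           size_t pixels, uint16_t *out)
{
    for (size_t p = 0; p < pixels; p++)
    {
        uint64_t sum = 0;
        for (size_t f = 0; f < n_frames; f++)
            sum += frames[f * pixels + p];
        out[p] = (uint16_t)(sum / n_frames);
    }
}
```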

bouncyball

  • Contributor
  • Hero Member
  • *****
  • Posts: 810
Hmm... that HLG stuff is interesting.

Danne

  • Developer
  • Hero Member
  • *****
  • Posts: 7018
Correct, mlv_dump-generated darkframe MLVs have the wrong (original) frame count in the header. Unfortunately I did not correct this in mlv_dump; I only put the correct code into MLV App.
Seems the steroid mlv_dump version has the corrected frame count:
in mlv_dump.c:
Code:
if(mlv_output)
{
    if(average_mode)
    {
        file_hdr.videoFrameCount = 1;
    }
    /* ... */
}
Averaged output in MLV App:

Mevi

  • New to the forum
  • *
  • Posts: 8
I've been wrestling with this problem for a few days and just can't find a solution. The conversion to a BT.2020 colour space in FFmpeg always shifts contrast and the reds and/or saturation.

If I remove the colour-space conversion from the encode, the footage looks fine, but I get banding if I push the grade, likely because it's an 8-bit colour space. The HDR is also broken as soon as SMPTE 2084 and BT.2020 are taken out, so it's no longer an HDR10 file.
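One possible cause of the colour shift: plain `scale` only resamples and tags; it does not do a full colourimetric conversion. Declaring the input explicitly and converting with `zscale` may behave better. An untested sketch (assumes an FFmpeg build with libzimg; `npl` is the luminance the SDR input is assumed to peak at; option names are from the zscale filter docs):

```shell
ffmpeg -i in.avi \
  -vf "zscale=tin=709:min=709:pin=709:rin=tv:npl=100:t=smpte2084:m=2020_ncl:p=2020:r=tv,format=yuv420p10le" \
  -c:v libx265 \
  -x265-params colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc \
  out.mp4
```

The point is that the source matrix/transfer/primaries are stated explicitly (`tin`/`min`/`pin`) instead of being guessed, so the BT.709-to-BT.2020 matrix change is actually applied to the pixels, not just written into the stream tags.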


FOR CONTEXT: The H.265 hardware acceleration of the iPad Pro makes this file format ideal for editing, with smooth scrubbing and playback of 4K without dropped frames. The iPad Pro can edit Canon's new 8K files!
There's no ProRes support in iOS yet, but I can import DNGs or other image sequences into the iPad and export them as HLG 150 Mb/s MP4 - that is currently my best workflow option, and the output files are perfect. It's stupidly fast to render, but kinda clunky moving these image sequences around on the iPad. If I do end up working this way, I will concatenate all the clips of image sequences into a single video file.



But back to that colour-space conversion problem...

I've tried using uncompressed AVI, HuffYUV, and even PNG sequences as the base format - DNG input doesn't work at all in FFmpeg.

I've googled and googled and googled and found nothing that isn't related to HDR media playback. Obviously, I'm living life on the razor's edge here.  :D

This is my .bat file for batch encoding AVI files. The "scale=out_color_matrix=bt2020" makes no difference at all, BTW.

Code:
for %%a in ("*.avi") do ffmpeg -i "%%~na.avi" ^
 -pix_fmt yuv420p10le -vf scale=out_color_matrix=bt2020 ^
 -c:v libx265 -tune grain -profile:v main10-intra ^
 -x265-params level=6.2:hdr10-opt=1:hdr10=0:repeat-headers=1:no-strong-intra-smoothing=1:bframes=0:b-adapt=2:frame-threads=0:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50):max-cll=1000,400:vbv-bufsize=800000:vbv-maxrate=800000:crf=14 ^
 -preset slow -brand mp42 -tag:v hvc1 -c:a aac -b:a 320k "%%~na.mp4"
pause

Kharak

  • Hero Member
  • *****
  • Posts: 984
Can AVI contain the metadata for HDR/HLG?

I am surprised there is no support for ProRes on the iPad. Then again, it's a tablet, but I thought they unified the entire iOS line, from phone to laptop?

Do you have a computer where you can test whether ProRes or DNxHR work with the metadata?
once you go raw you never go back

Mevi

  • New to the forum
  • *
  • Posts: 8
The new Apple Silicon Macs can run iOS apps, but it doesn't work the other way around. I can't see Apple merging separate revenue streams into a single product; they want me to buy a Mac to go with this iPad.  ::)

ProRes isn't supported on iOS and probably won't be. See the above paragraph.  :D My batch file does create all-intra files, which I guess is similar in concept. More expensive cameras have been recording H.264 All-I for some years now.

I have a Windows PC. VLC might play ProRes, and I can probably see the metadata too, but as an unsupported codec on iOS it's a dead end for me right now.

Your AVI colour-space metadata idea... that is something for me to investigate. The metadata of HuffYUV and uncompressed AVI output from MLV App is just labelled "YUV".

Thanks Kharak

Mevi

  • New to the forum
  • *
  • Posts: 8
Your AVI colour-space metadata idea... that is something for me to investigate. The metadata of HuffYUV and uncompressed AVI output from MLV App is just labelled "YUV".
Thanks Kharak

HLG/HDR10/Rec.2020/Rec.2100 really seems like something that ought to be set up from acquisition onward. For me and most editors, that means taking the raw DNG sequence and putting it straight onto an HLG BT.2020 timeline.

All my talk of colour-space transfers was a rabbit hole of doom, but I'm much better informed now. I might never use a Rec.709 workflow again.

My poor old brain kinda hurts now, but switching to a fully BT.2020 workflow will get the best out of our raw footage. If we grade in BT.2020 colour space and render to HLG, we can upload that to YouTube to be played back in HDR on compatible devices... you might even have one in your pocket.

Oh and my camera doesn't even arrive for another 2 days.  :D

BatchGordon

  • New to the forum
  • *
  • Posts: 6
Lately I have been teaching myself about the Bayer filter and some demosaicing algorithms.
I think those included in MLV App are great and the results are generally stunning.
What I'm not sure about is the correctness of the common way of processing videos recorded with the 1x3 binning pattern.
In my opinion, demosaicing the video and then doing a horizontal stretch (or a vertical shrink) loses part of the information captured in the raw image.
Personally, I would first expand the raw image, unbinning the pixels with an ad-hoc algorithm, and then apply one of the existing algorithms (e.g. AMaZE) to the resulting raw data.
I could be wrong, but if you look at the pixels that are binned, they sit in slightly different spatial positions compared to a normal Bayer pattern.
I have an idea of how to do the unbinning, but I would like some suggestions on where to make the changes in the code.
BTW, I am a software developer, so my problem is not the coding; it's that not all of the project code is clear to me.
If someone can help me get a test going, that would be great.
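To make the geometry concrete, here is a toy model of my own (not MLV App code) of 1x3 binning on a single colour plane: each binned value is the average of three consecutive same-colour rows, so it sits at the centroid of those rows, and the operation is not invertible - any "unbinning" is only an estimate:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of 1x3 binning on one colour plane (not MLV App code):
 * each output row is the average of 3 consecutive same-colour rows,
 * so the binned sample sits at the centroid of the 3 source rows. */
static void bin_plane_1x3(const uint16_t *plane, size_t h, size_t w,
                          uint16_t *out /* (h/3) x w */)
{
    for (size_t r = 0; r + 2 < h; r += 3)
        for (size_t col = 0; col < w; col++)
        {
            uint32_t sum = plane[r * w + col]
                         + plane[(r + 1) * w + col]
                         + plane[(r + 2) * w + col];
            out[(r / 3) * w + col] = (uint16_t)(sum / 3);
        }
}

/* A naive "unbinning": repeat each binned row 3x (nearest neighbour).
 * This round-trips through binning, but never restores the original
 * rows -- that information is gone for good. */
static void unbin_nearest(const uint16_t *binned, size_t h3, size_t w,
                          uint16_t *out /* (h3*3) x w */)
{
    for (size_t r = 0; r < h3; r++)
        for (size_t k = 0; k < 3; k++)
            for (size_t col = 0; col < w; col++)
                out[(r * 3 + k) * w + col] = binned[r * w + col];
}
```

Since many different source planes bin to the same output, the open question is not whether unbinning can restore detail (it can't) but which estimator degrades the subsequent demosaic the least.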

theBilalFakhouri

  • Developer
  • Hero Member
  • *****
  • Posts: 561
@BatchGordon

The unbinning idea came up two years ago; you may want to read from here:
https://www.magiclantern.fm/forum/index.php?topic=16516.msg210484#msg210484

A few posts later: the binned pixels can't be unbinned; the information in the original unbinned pixels is lost forever.

It would be cool if someone found a better way to process/stretch 1x3 data to enhance the quality.
700D 1.1.5 | no more ISOless LV err 8 / SDR104 @ 240 MHz - Constant! | Fixed Scrambled LiveView in Higher resolution | Real-Time correct framing in the Way

BatchGordon

  • New to the forum
  • *
  • Posts: 6
@theBilalFakhouri

Yes, I had already read that post. It helped me understand how the binning is done on these cameras, and I have seen that other people had much the same idea.

What you say is absolutely true: we cannot restore the information that is lost in binning.
But I still think an unbinning step is needed before demosaicing: the values for the unbinned pixels won't be the originals, just an estimate from the nearest ones.

My point is not that we can restore what is lost during binning, but that applying a demosaicing algorithm designed for unbinned pixels won't give the best results on a binned image. In other words, I think we are losing even more detail than the binning process itself implies.
It's absolutely just an opinion and I could be wrong, but I would like to test it.

P.S.: I have seen your contributions in many parts of the project and I find them really impressive!