MLV App 1.14 - All in one MLV Video Post Processing App [Windows, Mac and Linux]

Started by ilia3101, July 08, 2017, 10:19:19 PM


Danne

HLG seems to be an "HDR" format standard recently developed by the BBC (starting in 2017). It looks like a gamma approach built around HDR TV technology. Not sure if the TV reads metadata in the file and applies the gamma directly, or if it needs encoding in HLG, or maybe both? Ben Turley has a few formats around HLG. Maybe it's enough to apply a 1D LUT here?


Here's a pretty good explanation:

bouncyball

Quote from: MotherSoraka on December 21, 2020, 01:36:07 AM
Seems all my darkframes generated by MLV_Dump have the wrong frame count in metadata. My new MLVApp-generated darkframes don't exhibit the same issue.
So... have I missed something while using MLV_dump?
Correct, mlv_dump-generated darkframe MLVs have the wrong (original) frame count in the header. Unfortunately I did not correct this in mlv_dump; I only put the correct code into MLV App.

Danne

Quote from: bouncyball on December 31, 2020, 04:08:03 PM
Correct, mlv_dump-generated darkframe MLVs have the wrong (original) frame count in the header. Unfortunately I did not correct this in mlv_dump; I only put the correct code into MLV App.
Seems the steroid mlv_dump version has the corrected frame count:
in mlv_dump.c:
                if(mlv_output)
                {
                    if(average_mode)
                    {
                        file_hdr.videoFrameCount = 1;
                    }

Averaged output in MLV App:

Mevi

I've been wrestling with this problem for a few days and just can't find a solution. The conversion to the bt2020 colour space in FFmpeg always shifts the contrast and the reds and/or saturation.

If I remove the colourspace conversion from the encoding, the footage looks fine, but I get banding if I push the grade - likely because of the 8-bit colourspace. The HDR is also broken as soon as SMPTE 2084 and BT2020 are taken out, so it's not an HDR10 file.


FOR CONTEXT: The H.265 hardware acceleration of the iPad Pro makes this file format ideal for editing, with smooth scrubbing and playback of 4K without dropped frames. The iPad Pro can edit Canon's new 8K files!
There's no ProRes support in iOS yet, but I can import DNGs or other image sequences into the iPad and export them as HLG 150 Mb/s MP4 - that is currently my best workflow option and the output files are perfect. It's stupidly fast to render, but kinda clunky moving these image sequences around on the iPad. If I do end up working this way, I will concatenate all the clips of image sequences into a single video file.



But back to that colourspace conversion problem...

I've tried using uncompressed AVI, HuffYUV and even a PNG sequence as the base format - DNG doesn't work at all in FFmpeg.

I've googled and googled and googled and found nothing that isn't related to HDR media playback. Obviously, I'm living life on the razor's edge here.  :D

This is my .bat file for batch encoding AVI files. The "scale=out_color_matrix=bt2020" makes no difference at all BTW.

for %%a in ("*.avi") do ffmpeg -i "%%~na.avi" -pix_fmt yuv420p10le -vf scale=out_color_matrix=bt2020 -c:v libx265 -tune grain -profile:v main10-intra -x265-params level=6.2:hdr10-opt=1:hdr10=0:repeat-headers=1:no-strong-intra-smoothing=1:bframes=0:b-adapt=2:frame-threads=0:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50):max-cll=1000,400:vbv-bufsize=800000:vbv-maxrate=800000:crf=14 -preset slow -brand mp42 -tag:v hvc1 -c:a aac -b:a 320k "%%~na.mp4"
pause
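One more thing worth noting: FFmpeg distinguishes between *converting* colour and merely *tagging* it. If I've read the docs right, `scale=out_color_matrix` only changes the YUV matrix used by the scaler, while the per-stream output options below just declare the colour properties without converting anything. A fragment (the `...` stands in for the rest of the command above; filenames are placeholders):

```shell
ffmpeg -i input.avi -pix_fmt yuv420p10le ^
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc ^
  -c:v libx265 ... output.mp4
```

An actual gamut/transfer conversion would need something like the `colorspace` or `zscale` filter instead, and both of those need the input colour properties to be known or declared.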

Kharak

Can AVI contain the metadata for HDR/HLG?

I am surprised there is no support for ProRes on the iPad, then again it's a tablet, but I thought they unified the entire iOS, from phone to laptop?

Do you have a computer where you can test whether ProRes or DNxHR works with the metadata?
once you go raw you never go back

Mevi

The new Apple Silicon Macs can run iOS apps, but it doesn't work the other way. I can't see Apple merging separate revenue streams into a single product. They want me to buy a Mac to go with this iPad.  ::)

ProRes isn't supported on iOS and probably won't be. See the above paragraph.  :D My batch file does create All-Intra files, which I guess makes it similar in concept. More expensive cameras have been recording H.264 All-I for some years now.

I have a Windows PC. VLC might play ProRes. I can probably see the metadata too, but as an unsupported codec on iOS it's a dead end for me right now.

Your AVI colorspace metadata idea... that is something for me to investigate. The metadata of HuffYUV and uncompressed AVI output from MLVapp is just labeled "YUV".

Thanks Kharak

Mevi

Quote from: Mevi on January 04, 2021, 11:59:54 PM
Your AVI colorspace metadata idea... that is something for me to investigate. The metadata of HuffYUV and uncompressed AVI output from MLVapp is just labeled "YUV".
Thanks Kharak

HLG/HDR10/rec2020/rec2100 really seems like it ought to be set up from acquisition. For me and most editors, that'll mean taking the raw DNG sequence and putting it straight onto an HLG bt2020 timeline.

All of my talk of colour space transfers was a rabbit hole of doom, but I'm much more informed now. I might never use a rec709 workflow again.

My poor old brain kinda hurts now, but switching to a totally bt2020 workflow will get the best out of our RAW footage. If we grade in the bt2020 colour space and render to HLG, we can upload that to YouTube to be played back in HDR on compatible devices... You might even have one in your pocket.

Oh and my camera doesn't even arrive for another 2 days.  :D

BatchGordon

Lately I have been studying the Bayer filter and some demosaicing algorithms on my own.
I think those included in MlvApp are great and the results are generally stunning.
What I'm not sure about is the correctness of the common way of processing videos recorded with the 1x3 binning pattern.
In my opinion, demosaicing the video and then doing a horizontal stretch (or a vertical shrink) loses part of the information captured in the raw image.
Personally, I would first expand the raw image by unbinning the pixels with an ad-hoc algorithm, then apply one of the existing algorithms (e.g. AMaZE) to the resulting raw.
I could be wrong, but if you look at the pixels that are binned... they are spatially in a slightly different position compared to a normal Bayer filter.
I have an idea about how to do the unbinning, but I would like some suggestions on where to make the changes in the code.
BTW, I am a software developer, so my problem is not coding, but not all of the project code is clear to me.
If someone can help me proceed with the tests, that would be great.

theBilalFakhouri

@BatchGordon

The unbinning idea came up two years ago, you may want to read from here:
https://www.magiclantern.fm/forum/index.php?topic=16516.msg210484#msg210484

A few posts later: the binned pixels can't be unbinned, the information of the original unbinned pixels is lost forever.

It would be cool if someone found out how to process/stretch 1x3 data in a better way to enhance the quality.

BatchGordon

@theBilalFakhouri

Yes, I already read that post. It helped me understand how the binning is done in these cameras. And I have seen that other people had almost the same idea.

What you say is absolutely true: we cannot restore the information that is lost in the binning.
But I still think an unbinning process is needed before demosaicing: the value for the unbinned pixels won't be the original one, just an estimate from the nearest ones.

My opinion is not that we can restore what is lost during binning, but that applying a demosaicing algorithm made for unbinned pixels won't give the best results on a binned image. In other words, I think we are losing even more detail than the binning process itself implies.
It's absolutely just an opinion and I could be wrong, but I would like to test it.

P.S.: I have seen your contributions to many parts of the project and I really consider them impressive!

togg

A couple of questions,

1) The map of bad pixels gets out of track when I copy it to other pictures. It still works fine, but it looks misaligned.

https://i.imgur.com/vBmTeFq.png

2) Is there a way to export the bad pixel map and use it in other projects?

masc

@togg:
1) How did you get this picture? What exactly does it show?
2) No need for this
--> you define the bad pixel map for any clip. This map is saved as .bpm in the MLVApp folder. As soon as you activate the bad pixel map fix for any other clip using the same camera raw stream settings, the same .bpm will be used again. So ideally, you create this map once for your setting and use it forever, for all future clips (in the same setting).
5D3.113 | EOSM.202

theBilalFakhouri

It seems MLVApp doesn't skip the black borders (OB zones?) in MLV silent picture files; here is a sample from the 700D:
https://drive.google.com/file/d/1lZ91OUQ9BQN5kyBtrXpUa0-YTiz92V6J/view?usp=sharing

It's showing the full 5280x3528 image; it should be 5208x3478 (the effective pixels on the 700D) after skipping the black borders at the top and left.

Danne

Interesting. mlv_dump to dng shows the correct zones.

Exporting to DNG from MLV App seems to work. Only the preview is problematic.
Exporting to ProRes keeps the black borders. Needs fixing.

masc

Thanks for reporting. But in which MLV block do I find the info on how to crop? I would expect it in VIDF, but everything is "0":

RAWC the same... all "0".
5D3.113 | EOSM.202

Danne

Check default cropsize in a dng. Then check this place in mlv metadata:
Block: RAWI
  Offset: 0x00000034
  Number: 1
    Size: 180
    Time: 3.646000 ms
    Res:  5280x3528
    raw_info:
      api_version      0x00000001
      height           3528
      width            5280

      pitch            9240
      frame_size       0x01F16AC0
      bits_per_pixel   14
      black_level      2047
      white_level      15232
      active_area.y1   52
      active_area.x1   72

      active_area.y2   3528
      active_area.x2   5280
      exposure_bias    0, 0
      cfa_pattern      0x02010100
      calibration_ill  1

3528 - 52 = 3476
5280 - 72 = 5208


masc

Hm... something seems to be missing...

Can't find any 72 and 52 in the entire MLV structure.

Edit: here it is - very well hidden.

But when searching for "active_area", there is no line in the processing code using it. So it seems to be a bigger job to integrate - it isn't just a bug fix.
5D3.113 | EOSM.202

theBilalFakhouri

You may want to compare it against mlv_lite @ 5208x3478 MLV which shows correctly in MLVApp without black borders, here is a sample:
https://drive.google.com/file/d/14Co4dvdnQ3w4WN8vlX0wd6ctF6YireWH/view?usp=sharing

@masc
How did you display this information? I know it's possible with mlv_dump, but which tool are you using?



Quote from: BatchGordon on January 05, 2021, 06:56:44 PM
My opinion is not that we can restore what is lost during binning, but that applying a demosaicing algorithm made for unbinned pixels won't give the best results on a binned image. In other words, I think we are losing even more detail than the binning process itself implies.
It's absolutely just an opinion and I could be wrong, but I would like to test it.

We would all appreciate it if someone found a new way to enhance/maximize the quality of 1x3 files (I don't have an idea how it would be done). a1ex has done it before for Dual ISO files: he developed his algorithm over time, and we got fantastic results from Dual ISO images, with less quality loss and less aliasing in the processed files.

Quote from: BatchGordon on January 05, 2021, 06:56:44 PM
P.S.: I have seen your contributions to many parts of the project and I really consider them impressive!

Thank you very much! :)

masc

Quote from: theBilalFakhouri on January 28, 2021, 10:40:09 PM
You may want to compare it against mlv_lite @ 5208x3478 MLV which shows correctly in MLVApp without black borders, here is a sample:
https://drive.google.com/file/d/14Co4dvdnQ3w4WN8vlX0wd6ctF6YireWH/view?usp=sharing

@masc
How did show these information? I know it's possible with mlv_dump but which tool are you using?
Thank you! This file looks different in its metadata. Here I see the correct size in RAWI: 5208x3478 (instead of 5280x3528 in your other file). No idea if that is the reason why it is shown correctly. The information in raw_info is slightly different: x1=72, x2=5280, y1=28, y2=3506.

You get this information if you compile MLVApp in Debug mode and start it with the debugger. I set a breakpoint in MainWindow::drawFrame(). As soon as a frame is drawn, the debugger stops the app and I can see all internal variables. Here is the complete screen:
5D3.113 | EOSM.202

Danne

Maybe better to change the silent dng/mlv code so it works the same as when recording with mlv_lite?

bouncyball

Quote from: Danne on January 29, 2021, 06:12:09 PM
Maybe better to change the silent dng/mlv code so it works the same as when recording with mlv_lite?
Exactly!

Different recorder modules use different values. They need to be fixed the same way.

Danne

Care to look into it, bouncyball? 8) silent.c is a good start, I guess?

garry23

Quote from: Danne on January 29, 2021, 06:12:09 PM
Maybe better to change the silent dng/mlv code so it works the same as when recording with mlv_lite?

Please don't break the silent module for us humble photographers  :) ;)

If someone is going to look at the silent module, then maybe it's a chance to tweak a few things, e.g. getting EXIF working and the ability to change the image file name via Lua  ;)

Danne

What would break? It's kind of broken right now.
I don't see photography and film features as two divided groups the way you describe it. Both approaches are fruitful to one another. Also, please stick to the issue currently being discussed before presenting more personal wishes. Thanks