MLV App 1.14 - All in one MLV Video Post Processing App [Windows, Mac and Linux]

Started by ilia3101, July 08, 2017, 10:19:19 PM


masc

Quote from: [email protected] on February 20, 2020, 02:14:18 PM
But yes - I see exactly what you mean about the aliasing and moiré - thanks for that. Such shots might be 'objectionable', but they're a low percentage overall. I'm wondering whether it would be possible to make changes during the shoot, like 'removing' such zipped clothing (hit and miss, I know), OR to somehow deal with these shots in post production by slightly blurring the affected region - granted, this would be easier on a static shot.
The best approach would be other lenses and maybe better settings: with a very fast lens wide open @ 180° shutter it becomes very hard to produce, or even find, moiré. Color moiré can be filtered by MLVApp, but luma moiré can't (atm). And which debayer algorithm was selected for this video? It looks like "bilinear" or "simple"...
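The idea behind the chroma filter is simple: keep each pixel's luma and replace its chroma with a small neighborhood average, so false color patterns vanish while real detail stays. A rough sketch in C (not MLVApp's actual code):

#include <stdlib.h>

/* Rough idea of a color moiré filter: keep per-pixel luma,
   average the chroma over a small horizontal window.
   rgb is a w*h*3 interleaved float buffer. */
void suppress_color_moire(float *rgb, int w, int h, int radius)
{
    float *luma   = malloc(sizeof(float) * w * h);
    float *chroma = malloc(sizeof(float) * w * h * 3);

    for (int i = 0; i < w * h; i++)
    {
        luma[i] = 0.2126f * rgb[3*i] + 0.7152f * rgb[3*i+1] + 0.0722f * rgb[3*i+2];
        for (int k = 0; k < 3; k++)
            chroma[3*i+k] = rgb[3*i+k] - luma[i];
    }

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float c[3] = {0.0f, 0.0f, 0.0f};
            int n = 0;
            for (int dx = -radius; dx <= radius; dx++)
            {
                int xx = x + dx;
                if (xx < 0 || xx >= w) continue;
                for (int k = 0; k < 3; k++)
                    c[k] += chroma[3*(y*w + xx) + k];
                n++;
            }
            int i = y * w + x;
            for (int k = 0; k < 3; k++)
                rgb[3*i+k] = luma[i] + c[k] / n;
        }

    free(luma);
    free(chroma);
}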
5D3.113 | EOSM.202

ilia3101

Quote from: [email protected] on February 20, 2020, 02:14:18 PM
I've been experimenting with different horizontal resolutions between 1600 and 1832 pixels, but the moiré always seems to be there on *certain shots*. Anyway, I guess it's a matter of how much I let this be a road block (which it somewhat is), since I like 'perfect shots' as much as most of us do...
Always use the highest resolution you can - using a lower resolution simply crops the full image down.
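For example: if the full sensor readout is 1832 pixels wide, recording at 1600 uses only 1600/1832 ≈ 87% of the width - roughly an extra 1.15× crop on top of the camera's own crop factor.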

[email protected]

@ masc My 'fastest' lens is the Sigma 18-35mm at f/1.8, but these M/L tests I've been doing (indoors) have been on a Tokina 11-16mm f/2.8 lens. As you mentioned, I've used 1/50 sec as my shutter speed because I'm shooting at 24fps. (I *think* M/L gives me 1/48 sec?) I'm not sure what algorithm Andrew used for his video, but I've been doing tests with 'AMaZE', which I'm finding really nice. It may not be the sharpest, but I think overall it looks the most natural.
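(For reference, a 180° shutter is open for half of each frame interval: 0.5 / 24 = 1/48 s, so M/L's 1/48 is the exact 180° value at 24fps.)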
Oh - and when you say "Color moiré can be filtered by MLVApp" - which setting are you referring to, please?

@ilia3101 ..... "Always use the highest resolution you can - using a lower resolution simply crops the full image down."
Ah ... that makes sense - I was wondering why the M/L 'calculator' kept increasing the 'crop factor' readout when I was clicking down through the frame sizes. Thanks!

IDA_ML

Please note that if you film at high resolution but watch the video on a lower resolution screen, aliasing becomes much more obvious.  What I do to avoid this is always downscale the video to the resolution of the screen that I will be watching the video on.

If, however, you film in one of the 3x3 binning modes, say 1736x976, and your screen resolution is 1920x1080, upscaling the video to the screen resolution only makes aliasing worse.
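If you do that downscale with ffmpeg, something like this should work (just an example command; Lanczos scaling preserves detail well):

ffmpeg -i input.mov -vf scale=1920:-2:flags=lanczos output.mp4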

masc

Did you use such lenses wide open? Stopping down the aperture also brings moiré. What you show in your video looks very unusual for AMaZE.
Use CA Desaturate / Radius in MLVApp. https://github.com/ilia3101/MLV-App/wiki#ca-desaturate--radius It works for CAs and color moiré.
5D3.113 | EOSM.202

[email protected]

@IDA_ML ....
Ah ... OK - that may explain things too. My display is 1920x1080. For quality, I've been upscaling the 70D shots to 3K for adding CGI later (which tends to be 'sharp'). Then I'll finalize at 1920x817 and/or 3840x1634. Since I'm 'monitoring' at 1920x1080, it's not ideal (I can't afford a proper broadcast monitor). Even so, I *feel* it's more of a moiré issue than aliasing. Maybe they're linked? I don't know. Thanks for your thoughts!

@masc ....
Yes, I used the f/2.8 (Tokina) wide open. Thanks though - it's good to know wide open is best; that'll work well with my plans. I've recently purchased ND filters and intend to shoot everything at max aperture - love that shallow depth of field. (btw - the only footage I've put on this forum so far is (links to) a kitchen stove showing a motion problem, so when you refer to what you see in a 'video', I'm wondering if maybe you're thinking of Andrew's video upload?) Wow - I'd not tried that Chromatic Aberration control. That's absolute magic! I'd seen it there but thought nah - the lenses didn't need CA correction. Amazing, and it works *really* well on 14bit test shots - (I was getting magenta & green fringing on edges) - this setting's totally removed them. Thank you!

[email protected]


I'm confused - can MLV App export ProRes 4444 video at 12bit color depth? I simply need 12bit video files exported directly from MLV App for importing into Vegas Pro 16. I don't want to use proxies or any file linking. Whatever I try ends up as 8bit once inside Vegas, even though I've selected 32-bit floating point video levels in the Vegas Project Properties.

MLV App  >  Export 12bit Video  >  import into Vegas 16

Thanks ......

Danne


That should give the highest resolution export files.
Do note that exporting to a log signal from raw (i.e. Cineon, Alexa Log, etc.) matters more than 10 vs. 12 bit depth. There's no use exporting to rec709 with a clipped display-referred signal (1.0) if you want to work with the maximum dynamic range/color space in your preferred NLE.
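For a feel of what such a log curve does, here is a sketch of the ARRI LogC encode (v3, EI 800 parameters from ARRI's published white paper - MLVApp's own implementation may differ in details):

#include <math.h>

/* ARRI LogC v3 (EI 800) encode: maps linear scene values into the
   log signal, keeping shadow and highlight information for grading. */
float logc_encode(float x)
{
    const float cut = 0.010591f;
    const float a = 5.555556f, b = 0.052272f;   /* log segment */
    const float c = 0.247190f, d = 0.385537f;
    const float e = 5.367655f, f = 0.092809f;   /* linear toe */

    return (x > cut) ? c * log10f(a * x + b) + d
                     : e * x + f;
}

Mid-grey (0.18 linear) lands at about 0.39 - well away from both ends of the range, which is exactly why grading from log keeps more usable dynamic range than a baked rec709 signal.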

masc

Exporting with the ffmpeg options gives 10bit ProRes; exporting with AVFoundation gives 12bit (OSX only). 8bit ProRes can't be exported by MLVApp.
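For reference, the ffmpeg path is equivalent to something like this (illustrative flags, not our exact pipe):

ffmpeg -i input.mov -c:v prores_ks -profile:v 4444 -pix_fmt yuv444p10le output.mov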
5D3.113 | EOSM.202

reddeercity

I found an open source NVIDIA CUDA GPU acceleration framework for video apps:
Quote: "Set of Python bindings to C++ libraries which provides full HW acceleration for video decoding, encoding and GPU-accelerated color space and pixel format conversions"
https://github.com/NVIDIA/VideoProcessingFramework

https://github.com/NVIDIA/VideoProcessingFramework/commit/29e07b227817d1d323054e84a80a52bda4d61bdc
(adds support for FFmpeg)

Is this useful, since you guys use ffmpeg?

Edit: Here's an open source GPU-accelerated video and image processing framework for Mac:
https://github.com/BradLarson/GPUImage2
Hope this helps.

[email protected]

I'm using Windows, so I don't have the *AVFoundation option* for ProRes 4444 (only ffmpeg). Yes, I'm using Alexa Log.

Checking the ProRes 4444 clip's 'Properties' INSIDE of Vegas, it shows 2560 x 1090 x 32 - which seems to imply it's 8bit + alpha? When I try to check it with 'MediaInfo', I can't see any color depth reported. All I'm trying to achieve is 12bit throughput without proxies (wysiwyg).

Then I tried 16bit PNG sequences from MLVApp, which DO show in Vegas as 2560 x 1090 x 48, but even then I'm trapped:
● Rendering to almost all Vegas codecs results in 8bit exports (even from these 16bit PNGs from MLVApp). (My flow is FX > render > re-import > add FX > render > re-import.)
● If I *stay* PNG all the way, the files double in size (Vegas turns them into 2560 x 1090 x 64).
● If I export the MLVApp PNGs from Vegas as Uncompressed instead, it takes me back to 8bit.
● If I export the MLVApp PNGs from Vegas as Grass Valley Lossless, I'm also back to 8bit.
● If I export the MLVApp PNGs from Vegas as 'Sony YUV 10-bit', I finally get at least 10 bits, BUT it's YUV, and Vegas works in RGB internally, so that's a needless YUV-to-RGB conversion every time I add FX.

I simply want to go from the 12bit M/L DSLR raw through to a 12bit final edit in Vegas - and I'm not understanding: please, why am I having so much difficulty?

Danne

Your issues seem related to Sony Vegas, not MLV App. I recommend exporting to DNG files and using Resolve instead to maintain a 12bit raw quality pipeline.

masc

@[email protected]:
No idea what Vegas does. 16bit PNG has 3x16bit, ffmpeg ProRes 4444 has 4x10bit, no matter what Vegas reports. Maybe Vegas interprets them at such a low bit depth, but then this is (as Danne wrote) a Vegas issue.
What's the point of having such a high bit depth after grading? Debayered footage with the white balance burned in carries "different" values behind its 12 bits than 12bit RAW does. If you want to keep all the RAW information for your NLE, you should use DNG - but then you'll need a good RAW processing engine. And ProRes is also YUV.
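To illustrate that last point: a YUV codec implies a conversion like this on every RGB roundtrip (BT.709 coefficients; a sketch, not what Vegas actually runs):

/* R'G'B' -> Y'CbCr with BT.709 coefficients (Kr=0.2126, Kb=0.0722).
   At limited bit depth, every such roundtrip costs a little precision. */
void rgb_to_ycbcr709(float r, float g, float b,
                     float *y, float *cb, float *cr)
{
    *y  = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    *cb = (b - *y) / 1.8556f;   /* 2 * (1 - Kb) */
    *cr = (r - *y) / 1.5748f;   /* 2 * (1 - Kr) */
}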

@reddeercity: thanks for showing such projects, it's always very interesting to see what (and how) other people do. With CUDA the main problem is: I have no hardware to run such code. :( With OpenCL we would have a small chance, but what we tried so far (e.g. a bilinear demosaic) spent longer copying buffers between RAM and GPU than MLVApp needs to process the entire picture. So it seems it would only pay off with the entire pipeline on the GPU - but that would be a 100% new version of our app.
Very interesting is that they call the ffmpeg libs from within their app. We tried that when we started with ffmpeg - without success. That's why we have the current pipe solution, which is indeed not very nice on Windows (the cmd window).
5D3.113 | EOSM.202

[email protected]

Yeah, thanks guys - so for a 12bit approach (using Vegas) I'll go with PNG (or TIFF) exported from MLVApp (accepting the larger file sizes) --- OR --- I'll go 10bit uncompressed out of MLVApp, then Sony 10bit YUV, which performs very well as an intermediate. I just did a 25-generation A/B comparison with Sony YUV across 25 forced recompressions - it looks fine - I'm happy.

ilia3101

(image attachment not shown)

escho

I have some CR2 files from my EOS 600D and wanted to convert them into one MLV file. That worked fine, but the orientation changes. That means: the source CR2s were recorded upright, but the resulting MLV is in landscape format. I cannot find an option in MLVApp to keep the orientation.

The same happens if I convert the CR2s directly with raw2dng. The resulting MLV is turned 90°.

Any ideas what I'm doing wrong?

(MLVApp source code downloaded and compiled just 2 hours ago)
https://sternenkarten.com/
600D, 6D, openSUSE Tumbleweed

masc

@escho: 90° rotation is not possible (yet).

@Ilia: looks like I should keep my 5D2 :)
5D3.113 | EOSM.202

ilia3101

Raw files are not rotated - the rotation is all metadata, and MLV does not have rotation metadata. So I would need to add rotation support to raw2mlv. Might do it; I have been making a little progress on LibMLV.
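The rotation itself would be trivial - the catch is that turning the mosaic 90° also permutes the CFA pattern. A quick sketch (hypothetical, not raw2mlv code):

#include <stdint.h>

/* Rotate one raw frame 90 degrees clockwise: (x, y) -> (h-1-y, x).
   Note a 90 degree turn of an RGGB mosaic yields GRBG, so the MLV's
   CFA metadata would have to change too. Output is h x w. */
void rotate90_cw(const uint16_t *in, uint16_t *out, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            out[x * h + (h - 1 - y)] = in[y * w + x];
}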

@masc Yeahh! The 5D2 looks best. I'm starting to think there may be some truth to 'Canon colours' :) (I have no real experience with Nikon/Sony though)

escho

@ilia:
Yes, I just had a short look into the EXIF data and the source files. It's not possible atm. Thank you for the hint about the EXIFs.

My workaround:
Import and transcode the CR2s into MLVApp as they are, then export the MLV to (for example) mp4. The resulting mp4 goes through ffmpeg one more time to change the orientation of the video:

ffmpeg -i input.mp4 -c copy -metadata:s:v:0 rotate=90 Verpuppung.mp4

The result can be seen here in my starmaps-site:
https://sternenkarten.com/2020/02/23/verpuppung/
https://sternenkarten.com/
600D, 6D, openSUSE Tumbleweed

timbytheriver

@ilia3101

Thanks for sharing that info. Very interesting. Has anyone tried loading the 5D .ctl files (acting as IDTs) from the linked post into Resolve?

I am putting them into the LUT folder as described here http://colorizer.net/index.php?op=aces but Resolve isn't 'seeing' them at all.

PS This is the dropbox link that has all the sensor data files in: https://www.dropbox.com/sh/xepdrlu8qtubhhl/AADR_QuBf5Sn2WO7lp5MDNGOa?dl=0
5D3 1.1.3
5D2 2.1.2

timbytheriver

**Update**

As I understand it, the .CTL data needs to be converted to .DCTL for DaVinci Resolve to recognise it and treat it as an ACES IDT function.

Can anyone assist with translating the language of the .CTL file from the ACES links here:


// Canon_5D_Mk_III - 3200K
// Generated on August 05,2019 10:40:40 AM

import "utilities";

// B: camera RGB -> ACES matrix; b: per-channel white balance gains
const float B[][] = { { 0.883020, -0.083165,  0.200145},
                      {-0.008164,  1.114861, -0.106697},
                      { 0.048320, -0.441363,  1.393044} };

const float b[] = {1.589617, 1.000000, 2.151074};
const float min_b = min(b[0], min(b[1], b[2]));  // smallest gain (1.0 here)
const float e_max = 1.000000;                    // white level normalisation
const float k = 1.000000;                        // overall scale

void main (
    input varying float rIn,
    input varying float gIn,
    input varying float bIn,
    input varying float aIn,
    output varying float rOut,
    output varying float gOut,
    output varying float bOut,
    output varying float aOut )
{
    // Apply the white balance gains, normalise and clip each channel
    float Rraw = clip((b[0] * rIn) / (min_b * e_max));
    float Graw = clip((b[1] * gIn) / (min_b * e_max));
    float Braw = clip((b[2] * bIn) / (min_b * e_max));

    // Matrix from white-balanced camera RGB to ACES
    rOut = k * (B[0][0] * Rraw + B[0][1] * Graw + B[0][2] * Braw);
    gOut = k * (B[1][0] * Rraw + B[1][1] * Graw + B[1][2] * Braw);
    bOut = k * (B[2][0] * Rraw + B[2][1] * Graw + B[2][2] * Braw);
    aOut = 1.0;
}



to .DCTL as described in this guide: https://drive.google.com/open?id=15AB3eZ9m78pT03nJNY8SO3t1IqnF23lt

It's waaaaay over my head!  :o

PS This may be off-topic – or not, as it might be useful in MLVApp also?

5D3 1.1.3
5D2 2.1.2

ilia3101

It multiplies the raw channels by some gains (the array called b), clips them (seems like this could clip reconstructed highlights, if Resolve does that???), then multiplies by their own matrix (called B). I don't see why they had to make it two steps - gains + matrix could be one. I also don't understand why they clip the channels.
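In fact the two steps fold into one matrix: since min_b and e_max are both 1.0 (and ignoring the clip), the combined matrix is just M[i][j] = B[i][j] × b[j]. The top row, for example, becomes {0.883020 × 1.589617, -0.083165 × 1.0, 0.200145 × 2.151074} ≈ {1.4037, -0.0832, 0.4305}.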

It's just a really complicated way of applying a matrix. I can't imagine their matrix is 1000x better than Adobe's, which will be in your DNGs anyway.

It's not too off topic. It's all relevant imo.

timbytheriver

I believe this CTL data is based on the spectral sensitivity analysis of the cameras (5D2 and 5D3) carried out in testing.

I thought it would be great to see how it performs in Resolve, but it needs to be in DCTL form, so it needs translating from CTL.

Apparently this is a straightforward job! Just not for me! ;)

Anyone fancy a go?

Maybe MLVApp can use these transforms also?

5D3 1.1.3
5D2 2.1.2

ilia3101

We could try their matrices in MLV App... pretty easy - anyone could put it in the code and compile it.

Took this https://github.com/baldavenger/DCTLs/blob/master/Technical%20Transforms/AWG_to_Rec709.dctl and put in the matrix from the CTL you showed...

see if it works:

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    // The CTL's white balance gains (1.589617 / 1.000000 / 2.151074)
    // folded into its matrix; the clip step is omitted
    const float r = (p_R * 1.589617f *  0.883020f) + (p_G * -0.083165f) + (p_B * 2.151074f *  0.200145f);
    const float g = (p_R * 1.589617f * -0.008164f) + (p_G *  1.114861f) + (p_B * 2.151074f * -0.106697f);
    const float b = (p_R * 1.589617f *  0.048320f) + (p_G * -0.441363f) + (p_B * 2.151074f *  1.393044f);

    return make_float3(r, g, b);
}
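If it compiles, the .dctl should show up in the DCTL OFX plugin's dropdown once it's in the LUT folder.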

Danne