Magic Lantern Forum

Using Magic Lantern => Raw Video => Topic started by: mlrocks on July 17, 2021, 02:58:18 AM

Title: AI algorithms for debinning
Post by: mlrocks on July 17, 2021, 02:58:18 AM
I just did a test comparing three different modes on the 650D: crop 1920x1080 24 fps, anamorphic UHD 24 fps, and crop 3K 16 fps. The full-resolution mode is not working; I tried it, but since it isn't working I could not test it. To my big surprise, the anamorphic UHD footage on my 27-inch computer monitor in full-screen mode is much sharper than the crop 1080p, just not at the level of native UHD. The native resolution of the anamorphic UHD is 1280x1800, just about 20-30% more than that of the crop 1080p, yet it looks way more delicious. The anamorphic UHD looks very similar to the crop 3K 16 fps, almost at the same level.

This really surprises me, because I used to think that the anamorphic modes were just simple up-rezzing and that the native resolution was the real thing. It seems that the sensor records all of the information, and Canon then has a binning algorithm that combines 3 horizontal pixels into 1 fat pixel. If MLV App had a transformation/debinning algorithm to reverse-engineer the Canon binning process, or an AI algorithm to correctly guess the original 3 pixels from this fat pixel, then the anamorphic UHD would be very close to native UHD. Seen this way, binning is more like a compression process; with the proper codec, it actually helps fit within the maximum write speed of the camera. After writing to the cards, the information could be read out again by computer software. If this is true, then the binning modes are better than the pixel-by-pixel crop modes, because more information is recorded by the sensor and saved in the raw files.
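The binning idea described above can be sketched in a few lines (a hypothetical model using a plain mean of three horizontal neighbours; the real Canon readout averages same-colour photosites within the Bayer pattern, but the arithmetic is the same):

```python
# Toy model of 1x3 horizontal binning: average each group of three
# horizontal pixels into one "fat" pixel. Simplified -- the real
# readout bins same-colour photosites within the Bayer pattern.

def bin_1x3(row):
    """Collapse a row of pixel values 3:1 by averaging triples."""
    assert len(row) % 3 == 0
    return [sum(row[i:i + 3]) / 3 for i in range(0, len(row), 3)]

row = [10, 20, 30, 40, 40, 40]
binned = bin_1x3(row)
print(binned)  # [20.0, 40.0] -- a 3:1 reduction in recorded data
```

The 3:1 reduction is exactly why these modes fit within the card write-speed budget.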

Title: Re: AI algorithms for debinning
Post by: Jonneh on July 18, 2021, 12:08:02 AM
In the anamorphic mode you mention, there's horizontal binning, and the desqueezing algorithm (and it does matter which one you pick) then creates the missing pixels. If it looks better to you than the crop 1080p, that could be the reduced crop factor (unless you are compensating with a different focal length, but then other confounds may come into play, such as distortion or sharpness, probably both in favour of the anamorphic mode), or it could be because the final image has a higher vertical resolution. I read somewhere that the eye is more tolerant of reduced horizontal resolution, though I'm not sure if that's actually true.
Title: Re: AI algorithms for debinning
Post by: theBilalFakhouri on July 18, 2021, 12:38:40 AM
There's no binning in the anamorphic mode you mention, just horizontal line skipping..

Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically. Other ones like the 650D skip vertical lines of pixels instead of binning, resulting in more moire/aliasing in 1080p. By using *Anamorphic* 1x3 modes, the camera bins pixels horizontally, but instead of skipping vertical lines we read all of them, resulting in less moire/aliasing, more resolution and more detail.


@mlrocks

This topic has been discussed before; de-binning is impossible due to the information lost while averaging (binning) the pixels.

There was a suggestion by a1ex some years ago to enhance the de-squeezing algorithm for 1x3 (anamorphic) modes . . until now no one has tried to work on it . . but we will never get 1:1 detail from 1x3 footage.

If the Topaz team added an option to Video Enhance AI to de-squeeze real anamorphic footage (which upscales only the horizontal video resolution), maybe this would give better-looking 1x3 footage using an AI de-squeezing algorithm (just my guess; has anyone tried to contact the Topaz team? :P).
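The information-loss argument is easy to see with a toy example (assuming simple mean binning): averaging is many-to-one, so distinct pixel triples collapse to the same binned value, and no de-binning algorithm, AI or otherwise, can tell them apart with certainty.

```python
# Two very different pixel triples that bin to the same value:
# once averaged, a flat patch and a hard edge are indistinguishable.

def bin_triple(a, b, c):
    return (a + b + c) / 3

flat_patch = bin_triple(30, 30, 30)  # no detail at all
hard_edge = bin_triple(0, 30, 60)    # a sharp gradient
print(flat_patch, hard_edge)         # 30.0 30.0 -- identical
```

An AI upscaler can only guess which of the many possible originals produced each binned value; it cannot recover the true one.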
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 18, 2021, 03:01:47 AM
Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically,

I stand corrected, thanks! (Original post edited to avoid misinformation). Although I knew about the horizontal binning, and that only the 5D3 had 3x3, I thought the anamorphic mode used line skipping. Good to know how it actually works.

Quick thought, and I'm sure I'm showing my ignorance here, but does one have the option of implementing (horizontal) line skipping instead of binning? If so, I wonder what the effect on aliasing would be if one alternated (cycled) the skipped pixels from frame to frame, such that you skip lines 2 and 3 in frame 1, 3 and 1 in frame 2, and 1 and 2 in frame 3 before repeating the loop. I imagine the illusion of temporal averaging would remove jaggies. Not so sure about moiré, but that might be improved too. One might see a shimmering effect at approx. 8 fps, but that might just look like inoffensive luminance noise, as long as the line alternation is done in both dimensions (and even that might not be necessary). Lots of speculation here though.
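The cycling pattern proposed here can be written down concretely (purely illustrative; whether the sensor readout could actually be programmed this way is a separate question):

```python
# Cycled line skipping: each frame keeps a different one of every
# three lines, so over a 3-frame cycle every line is sampled once.

def kept_lines(frame_index, total_lines=9):
    """Return the (0-based) line indices kept in this frame."""
    offset = frame_index % 3
    return [i for i in range(total_lines) if i % 3 == offset]

for f in range(3):
    print(f, kept_lines(f))
# Frame 0 keeps 0,3,6; frame 1 keeps 1,4,7; frame 2 keeps 2,5,8.
```

At 24 fps the full cycle repeats 8 times a second, which is where the speculated ~8 fps shimmer would come from.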

This may be moot if one is stuck with the native behaviour, but can't one sample the whole sensor and then discard unwanted lines before committing to memory? Maybe not. As you can tell, I don't have the first idea of how it works.  ;)
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 04:48:28 AM
Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically. Other ones like the 650D skip vertical lines of pixels instead of binning, resulting in more moire/aliasing in 1080p. By using *Anamorphic* 1x3 modes, the camera bins pixels horizontally, but instead of skipping vertical lines we read all of them, resulting in less moire/aliasing, more resolution and more detail.


@mlrocks

This topic has been discussed before; de-binning is impossible due to the information lost while averaging (binning) the pixels.

There was a suggestion by a1ex some years ago to enhance the de-squeezing algorithm for 1x3 (anamorphic) modes . . until now no one has tried to work on it . . but we will never get 1:1 detail from 1x3 footage.

If the Topaz team added an option to Video Enhance AI to de-squeeze real anamorphic footage (which upscales only the horizontal video resolution), maybe this would give better-looking 1x3 footage using an AI de-squeezing algorithm (just my guess; has anyone tried to contact the Topaz team? :P).

Thanks for the info, theBilalFakhouri. I hope Topaz Video Enhance AI can do something about this. What were a1ex's suggestions at the time?
If Canon raw has metadata for each pixel in its raw files, this may be helpful for restoration of the binned pixels.
Even though the 5.7k anamorphic raw is not as good as the 5.7k pixel by pixel raw, as long as the debinning is intelligent, the 5.7k anamorphic raw may be close in terms of image quality.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 05:04:39 AM
In the anamorphic mode you mention, there's horizontal binning, and the desqueezing algorithm (and it does matter which one you pick) then creates the missing pixels. If it looks better to you than the crop 1080p, that could be the reduced crop factor (unless you are compensating with a different focal length, but then other confounds may come into play, such as distortion or sharpness, probably both in favour of the anamorphic mode), or it could be because the final image has a higher vertical resolution. I read somewhere that the eye is more tolerant of reduced horizontal resolution, though I'm not sure if that's actually true.

Hello, Jonneh:

I used the same lens at the same distance with the different modes on the Canon 650D. The scene has some trees (about 50% of the scene), a compartment gate, and a parking lot with cars. I wanted to see how the different modes handle scenes with a lot of detail. I also tested close-up shots in the different modes, but there the resolution difference was not significant.
The cropped 1920x1080 24p has a tighter view than the anamorphic UHD, i.e., the anamorphic UHD has a wider view than the cropped 1080p, so the anamorphic UHD is under more "stress".
I was really surprised that the anamorphic UHD looks much sharper than the cropped 1080p. I understand that the anamorphic UHD uses a larger sensor area and therefore maybe has better aesthetics, but I actually feel that the resolution of the anamorphic UHD is noticeably better than the cropped 1080p. The resolution of the anamorphic UHD looks close to the cropped 3K mode, yet the native resolution of the anamorphic UHD is just 30% more than the cropped 1080p. I really don't know why; I just share here an observation that goes against my previous assumption that the anamorphic modes do not help resolution.

Title: Re: AI algorithms for debinning
Post by: Jonneh on July 18, 2021, 09:31:28 AM
Hello, Jonneh:
I was really surprised that the anamorphic UHD looks much sharper than the cropped 1080p. I understand that the anamorphic UHD uses larger sensor size, therefore, maybe better aesthetics. But I actually feel that the resolution of the anamorphic UHD is noticeably better than the cropped 1080p. The resolution of the anamorphic UHD looks close to the cropped 3k mode, yet the native resolution of the anamorphic UHD is just 30% more than the cropped 1080p. I really don't know why, just share here an observation that is against my previous assumption that the anamorphic modes do not help resolution.

You say that the anamorphic mode is "under more stress", which I take to mean that a given object will be smaller in the field of view, but are you comparing objects side-by-side, such that you can judge the resolving power? Otherwise, what you're noticing is likely a general impression, and as you say in your reply to Bilal, part of this is probably the upscaling algorithm doing its thing. Since the interpolated pixels are created somewhat intelligently, although no information is gained in the strict sense, the illusion is maintained that it is. I doubt any metadata is stored relating to the prebinned pixels, but I could be wrong. After all, there's not much to represent other than the intensity of the RGB pixels, so at best you could sample the discarded pixels at a lower bit rate, but then the file would be significantly larger. What's the resolution of the 1080p mode on the 650D, out of interest?

Since you have some trees in your scene, are they against a sky? If so, have you compared that area at 100% zoom between the two modes? I know the standard viewing experience isn't one of a pixel peeper, but I consistently find artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes (which I tended to use for landscapes, often with trees in the scene). I now use, on the 5D3, one of the higher-res crop modes with a wider lens. Are you using MLV App?
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 18, 2021, 01:35:24 PM
Mlrocks,

I think the confusion comes from the fact that you are comparing a 6,9 Mp anamorphic image [(1280x3)x1800] with a 2,07 Mp cropped image (1920x1080).  The difference in size is not 30% but 3,3 times (330%), regardless of the method by which the anamorphic image was obtained.  Yes, you record 1280 pixels horizontally in that case, to keep the bandwidth low, but these get stretched by a factor of 3 in post to 3840 pixels, which is your final horizontal image size.  Also, your vertical resolution (1800 p) is much higher than the 1080 p in the cropped case.  It is quite clear that a 6,9 Mp image will look much more detailed than a 2,07 Mp one that has been cropped, i.e. cut out so that it contains only a fraction of the sensor area, and then blown up to the full 27" screen size.
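The pixel counts in question can be verified directly:

```python
# Output pixel counts of the two modes being compared.
anamorphic = (1280 * 3) * 1800   # de-squeezed 1x3 frame
cropped = 1920 * 1080            # crop 1080p frame

print(anamorphic)            # 6912000  (~6.9 Mp)
print(cropped)               # 2073600  (~2.07 Mp)
print(anamorphic / cropped)  # ~3.33x, i.e. 330%, not 30%
```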

 
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 18, 2021, 02:39:49 PM
Mlrocks,

I think, the confusion comes from the fact that you are comparing a 6,9 Mp anamorphic image, [(1280x3)x1800] with 2,07 Mp cropped image (1920x1080).

Given the wonders of scaling algorithms, this is the key point (if they were fully dumb, such that they simply triplicated binned pixels, we'd have to resort to the increased vertical resolution more than offsetting the decreased horizontal resolution, or some such, as an explanation for the anamorphic mode looking better). A fairer test would be a 1920x1080 image upscaled to 3840x2160, or something similar, comparing the outputs. That way, the advantages of the increased input vertical resolution (before upscaling) in the anamorphic mode are divorced from output resolution.
Title: Re: AI algorithms for debinning
Post by: Levas on July 18, 2021, 03:26:03 PM
Hard to compare crop mode against anamorphic mode, hard to get the same field of view in both modes  :P

But for me the biggest difference is that in crop mode, the footage can be tweaked in post like it was a normal raw photo image.
No moire, no aliasing, just pristine sharp pictures  8)
In anamorphic mode, you have to be careful when tweaking your footage in post, too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:
but I find I consistently have artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes

Lately I've been experimenting a little with anamorphic mode; you can get it to look good, with no jagged edges and such, but it won't be as sharp as crop mode.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee and export as PNG files.
So after I have my PNG image sequence I use FFmpeg to make a correct aspect ratio movie file out of it.

Code:
ffmpeg -i M27-1724_frame_%06d.png -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4
The above works for an image sequence named M27-1724_frame_ followed by six digits, with a png extension.
You could also stretch a movie file instead, of course one that isn't stretched yet:
Code:
ffmpeg -i Input.mp4 -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4

The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstretching.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 03:40:57 PM
Mlrocks,

I think the confusion comes from the fact that you are comparing a 6,9 Mp anamorphic image [(1280x3)x1800] with a 2,07 Mp cropped image (1920x1080).  The difference in size is not 30% but 3,3 times (330%), regardless of the method by which the anamorphic image was obtained.  Yes, you record 1280 pixels horizontally in that case, to keep the bandwidth low, but these get stretched by a factor of 3 in post to 3840 pixels, which is your final horizontal image size.  Also, your vertical resolution (1800 p) is much higher than the 1080 p in the cropped case.  It is quite clear that a 6,9 Mp image will look much more detailed than a 2,07 Mp one that has been cropped, i.e. cut out so that it contains only a fraction of the sensor area, and then blown up to the full 27" screen size.


Hello, IDA_ML:

I agree with your analysis. I think the information is there in the anamorphic modes; it just takes software skill to get it out. For pixel-by-pixel modes, the information is not there. I will use the anamorphic modes more in the future.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 03:42:41 PM
Hard to compare crop mode against anamorphic mode, hard to get the same field of view in both modes  :P

But for me the biggest difference is that in crop mode, the footage can be tweaked in post like it was a normal raw photo image.
No moire, no aliasing, just pristine sharp pictures  8)
In anamorphic mode, you have to be careful when tweaking your footage in post, too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:
Lately I've been experimenting a little with anamorphic mode; you can get it to look good, with no jagged edges and such, but it won't be as sharp as crop mode.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee and export as PNG files.
So after I have my PNG image sequence I use FFmpeg to make a correct aspect ratio movie file out of it.

Code:
ffmpeg -i M27-1724_frame_%06d.png -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4
The above works for an image sequence named M27-1724_frame_ followed by six digits, with a png extension.
You could also stretch a movie file instead, of course one that isn't stretched yet:
Code:
ffmpeg -i Input.mp4 -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4

The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstretching.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

Hello, Levas:

Have you tried the RawTherapee + Topaz AI workflow? Do you think it would work better?
Title: Re: AI algorithms for debinning
Post by: Levas on July 18, 2021, 03:59:19 PM
Didn't test the workflow with Topaz (don't have it).

But I expect it not to give better results (if you're aiming for fewer artifacts).
The Canon horizontal pixel binning used in the anamorphic modes is pretty much unique in the cinema/photo world.
The only ones using it are a bunch of ML enthusiasts with old Canon cameras  :P

Topaz AI is made with normal images in mind: full pixel-readout images, where no tricks like pixel binning are used.
So it's not trained/made for images that are horizontally pixel-binned.
Therefore I don't expect it to give better results on anamorphic ML footage.

Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 18, 2021, 04:52:58 PM
... but I find I consistently have artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes (which I tended to use for landscapes, often with trees in the scene). I now use (on the 5D3), one of the higher res crop modes with a wider lens. Are you using MLV App?
 
Frankly, I am quite surprised to hear that you are having all these problems with the anamorphic modes.  I film wide-angle landscape videography on the EOS-M using the anamorphic 4K (1280x2160)/24fps/10-bit lossless mode all the time, and the results are fantastic!  If the scene is exposed to the right and I make sure I do not blow out the highlights, I never get the artefacts that you are talking about.  In high-contrast scenes, I typically increase the exposure until zebras start to appear and then dial it down by 0,5 to 1 stops.  Using a high-quality lens and VND filters, as well as precise focusing, are a must!  Are you sure you are not getting chromatic aberrations?  My landscape lenses are the EF-S 11-18, EF 24/2,8 IS and the EF 35/F2 IS.

As far as postprocessing is concerned, I use MLVApp and it does a hell of a job when processing anamorphic MLV files, especially the latest v. 1.13.  Please try it if you haven't done this yet!  The default settings are great and if you don't use extreme adjustments you will get very pleasing results.  Typically, I export to ProRes and do the video editing in Resolve where I add some sharpness to my taste if necessary.  This is fully enough to compensate for the slight anamorphic softness that Levas mentioned.  That's all.

All in all, the 5k anamorphic and 5,7k anamorphic are my filming modes on the EOS-M and the 5D3, respectively.  These modes are a little tricky to use, but once you learn how to squeeze the maximum image quality out of them, you will never go back to other modes.
Title: Re: AI algorithms for debinning
Post by: Levas on July 18, 2021, 07:02:32 PM
The quality difference between crop and anamorphic is in the pixel peepers range.

But difference aside, even the anamorphic modes look a hell of a lot better than 4k/UHD clips from my phone   8)
Title: Re: AI algorithms for debinning
Post by: Levas on July 18, 2021, 07:05:05 PM
By using magic lantern raw video for years, you get used to the quality of it.
Sometimes I shoot a 4k/UHD clip with my phone, just to see a different quality level of footage  :P
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 08:04:04 PM
The quality difference between crop and anamorphic is in the pixel peepers range.

But difference aside, even the anamorphic modes look a hell of a lot better than 4k/UHD clips from my phone   8)

That was what I guessed. When I view the whole scene on my computer's monitor at a viewing distance of about 1 foot, I don't see those zigzags. In the latest version of MLV App, if a low-contrast lens is used and the sky is overcast, I use 81 contrast, 81 clarity (micro-contrast), and 81 chroma-separation sharpening. I don't see those zigzags when using the anamorphic modes. The results are stunningly 3D-like. It is like seeing the outside through an open window with a frame but no glass.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 08:23:31 PM
I'm starting to believe that the anamorphic 5.7k raw on the 5D3 is a true 5.7k raw with a compression ratio of 6 or above, as the horizontal binning has a compression ratio of 3 or above, and the 14-bit lossless LJ92 has a compression ratio of 1.5-2.5, depending on scene complexity and ISO level. The image quality may be "mediocre" compared to native uncompressed 6k raw, but it should be much better than 1920x2340 if the AR is 2.40.
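The combined ratio follows from multiplying the two factors (taking the LJ92 range quoted in this thread at face value):

```python
# Effective "compression ratio" of 1x3 anamorphic raw: 3:1 from the
# horizontal binning times the 1.5-2.5x from 14-bit lossless LJ92.
binning_ratio = 3.0
lj92_low, lj92_high = 1.5, 2.5

print(binning_ratio * lj92_low)   # 4.5
print(binning_ratio * lj92_high)  # 7.5 -- so "6 or above" is plausible
```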

Currently, the Red Komodo has 6K raw with compression ratio choices of 3 to 12, the same as the BMPCC 6K Pro. If implemented the same way as on the Canon R5, the Canon 1DX3 and C500MK2 have 6k raw with a compression ratio of 6.

It will be very interesting if someone who can access these cameras does a test on the following cameras:

1. 5D3, Vista Vision, ML Raw, 6k anamorphic, compression ratio of 6 or above;
2. 70D, Super 35mm, ML Raw, 5k anamorphic, compression ratio of 6 or above;
3. EOS-M, Super 35mm, ML Raw, 5k anamorphic, compression ratio of 6 or above;
4. Red Komodo, Super 35mm, Red Raw, 6k pixel by pixel, choose option of compression ratio of 6 or above;
5. BMPCC 6K Pro, Super 35mm, BRaw, 6k pixel by pixel, choose option of compression ratio of 6 or above;
6. Canon C500MK2, Vista Vision, Canon Raw, 6k pixel by pixel, compression ratio of 6;
7. Canon C200, Super 35mm, Canon Raw, 4k pixel by pixel, compression ratio of 6;
8. Canon 1DX3, Vista Vision, Canon Raw, 6k pixel by pixel, compression ratio of 6;
9. Canon R5, Vista Vision, Canon Raw, 8k pixel by pixel, compression ratio of 6;
10. Arri Alexa Mini LF, IMAX, Arri Raw, 4k pixel by pixel, uncompressed (no option for compression?);
11. Arri Alexa LF, IMAX, Arri Raw, 6k pixel by pixel, uncompressed (no option for compression?);
12. Sony Venice, Vista Vision, Sony Raw, 6k pixel by pixel, compression ratio of 6 for Raw Lite XOCN;

The results can be mind blowing.

My bet is that all of these cameras will be similar in terms of image quality if not pixel peeping, as long as the operator is an expert on the camera and on the raw process to release the full potential of the system.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 18, 2021, 08:57:00 PM
Didn't test the workflow with Topaz (don't have it)

But I expect it to not give better results (if you're aiming to get less artifacts).
The Canon horizontal pixelbinning used in anamorphic modes is far unique in cinema/photo world.
The only ones using it are a bunch of ML enthousiast with old Canon cameras  :P

Topaz AI is made with normal images in mind, full pixel readout images, where no tricks like pixelbinning is used.
So it's not trained/made for images that are horizontal pixelbinned.
Therefore I don't expect it to give better results on anamorphic ML footage.
This makes sense. Thanks for the explanation.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 18, 2021, 11:12:34 PM
 
Frankly, I am quite surprised to hear that you are having all these problems with the anamorphic modes.  I film wide angle landscape videography on the EOS-M using the anamorphic 4k (1280x2160)/24fps/10bit lossless mode all the time and results are fantastic! If the scene is exposed to the right but I make sure, I do not blow up the highlights, I never get the artefacts that you are talking about.  In high-contrast scenes, I typically increase the exposure until zebras start to appear and then dial it down by 0,5 to 1 stops.  Using a high-quality lens and VND filters, as well as precise focusing are a must!  Are you sure, you are not getting chromatic aberations? My landscape lenses are the EF-S 11-18, EF-24/2,8 IS and the EF 35/F2 IS. 

As far as postprocessing is concerned, I use MLVApp and it does a hell of a job when processing anamorphic MLV files, especially the latest v. 1.13.  Please try it if you haven't done this yet!  The default settings are great and if you don't use extreme adjustments you will get very pleasing results.  Typically, I export to ProRes and do the video editing in Resolve where I add some sharpness to my taste if necessary.  This is fully enough to compensate for the slight anamorphic softness that Levas mentioned.  That's all.

All in all, the 5k anamorphic and 5,7k anamorphic are my filming modes on the EOS-M and the 5D3, respectively.  These modes are a little tricky to use but once you learn how to sqweeze the maximum image quality out of them, you will never go back to other modes.

Yeah, I did persevere for a while, but the same issues kept reappearing. Without pixel peeping, it's mostly trees against the sky that look bad---although there it's very obvious. Zooming in reveals other issues, but that would be OK if it weren't for the first set of problems. Trees matter! ;)

Most of my lenses are vintage, but the wide angle one is my sole posh modern, the 16-35 F4L IS: sharp as you like, and practically free of CAs. The artefacts are often like Levas' image, posted to show the same thing, but I know I've seen some funkier cyan and yellow mixed in:

https://www.magiclantern.fm/forum/proxy.php?request=https%3A%2F%2Flive.staticflickr.com%2F65535%2F50140581561_3e308f2153_o.png&hash=8fffef0a7c7210f835a48404f3f54511

No blown-out highlights---I also use zebras and then dial things back---and I focus precisely on my subject. If being slightly out of the plane of focus means that the trees will inevitably show artefacts, I'm likely to resort to crop mode for most shots, unless I absolutely need a wider field of view than the 16mm end of my lens affords. No NDs---for the dusky scenes I tend to be filming, I don't find I need them (while staying at or below the approximate diffraction point of my lens). I also use MLV App to postprocess (default settings), although it's true I haven't tried 1.13. It's a shame, because I love the stability of the mode and using the full 16mm of the above lens instead of having a 24mm equivalent with added perspective distortion.

Still, these issues cropped up while filming on a trip, such that I didn't have time to do a thorough diagnosis. I'll find the time to sit down and do one, and I'll post some DNGs in the meantime, once I'm reunited with my material.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 18, 2021, 11:13:16 PM
To prove my statements that there is barely an image quality difference between the 5,7k 1x3 anamorphic footage on the 5D3 and the actual CR2 still image of the full frame sensor in that camera, I have just performed the following experiment:

1) I filmed a short clip at the 1392x2340 resolution at 24fps, 14-bit lossless in the 5,7k anamorphic mode.  At this resolution, the crop factor is 1,38, so the final frame size is 4176x2340 pixels;
2) Then I shot a 27 MB still CR2 image of the same scene using the same lens;
3) Both the clip and the still image were opened in MLVApp, processed to my liking and then a frame grab from the clip was exported as a JPG.  The same was done with the CR2 image.
4) The CR2 JPEG was opened in Photoshop, slightly cropped to achieve nearly the same view as the frame grab from the clip, and then saved without further processing for comparison.
5) Both JPEGs were then compared in FastStone Image viewer. 
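As a quick sanity check on the numbers in step 1 (assuming the 5D3's full still-image width of 5760 photosites, which may differ slightly from the exact LiveView readout width):

```python
# Crop factor implied by the recorded width in the 5.7k anamorphic mode.
sensor_width = 5760       # assumed full-width photosite count (5D3 stills)
recorded_width = 1392     # 1x3-binned columns actually recorded
desqueezed_width = recorded_width * 3

print(desqueezed_width)                 # 4176
print(sensor_width / desqueezed_width)  # ~1.38, matching the quoted crop
```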

All results, including the original RAW files are ready for download here:

https://we.tl/t-D56tL8qor2

Comparisons at 100% magnification are also included for the pixel peepers.  The above link will be active for 7 days.

As you can see, there is barely a difference in image quality between both scenarios.  In fact, watching them on my 30" (2560x1440) monitor in full screen from about 1m distance, I could not tell which is which.  Even at 100% magnification, the differences are barely perceptible.  I do not see any disturbing artefacts or aliasing either.  Continuous recording, low crop factor and stable camera operation make me feel that I can continue using  the 1x3 anamorphic modes and MLVApp with confidence.   

Happy pixel peeping!
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 19, 2021, 12:41:16 AM
Nice comparison IDA_ML---thanks for that.

Indeed, viewing as you describe, I'd be hard-pushed to pass an A/B test too. At 100%, a few artefacts certainly appear: banding either side of vertical objects (lamp posts), zig zags which would otherwise be clean (tramlines), some jaggies (cable on roof in foreground), horizontal stretching in fence poles, general lack of definition in traffic signs, and the reds look rather muted (could be because of editing). But now we're in full-on pixel-peeping mode, belong in another forum, and probably won't ever film anything beyond a brick wall and a studio test scene.

The problems I describe are visible in the first scenario, but it's time I produced some images to back this claim up. I'll do a comparison once I'm back with my equipment, and we'll see if it's much ado about nothing or not.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 19, 2021, 12:49:11 AM
To prove my statements that there is barely an image quality difference between the 5,7k 1x3 anamorphic footage on the 5D3 and the actual CR2 still image of the full frame sensor in that camera, I have just performed the following experiment:

1) I filmed a short clip at the 1392x2340 resolution at 24fps, 14-bit lossless in the 5,7k anamorphic mode.  At this resolution, the crop factor is 1,38, so the final frame size is 4176x2340 pixels;
2) Then I shot a 27 MB still CR2 image of the same scene using the same lens;
3) Both the clip and the still image were opened in MLVApp, processed to my liking and then a frame grab from the clip was exported as a JPG.  The same was done with the CR2 image.
4) The CR2 JPEG was opened in Photoshop, slightly cropped to achieve nearly the same view as the frame grab from the clip, and then saved without further processing for comparison.
5) Both JPEGs were then compared in FastStone Image viewer. 

All results, including the original RAW files are ready for download here:

https://we.tl/t-D56tL8qor2

Comparisons at 100% magnification are also included for the pixel peepers.  The above link will be active for 7 days.

As you can see, there is barely a difference in image quality between both scenarios.  In fact, watching them on my 30" (2560x1440) monitor in full screen from about 1m distance, I could not tell which is which.  Even at 100% magnification, the differences are barely perceptible.  I do not see any disturbing artefacts or aliasing either.  Continuous recording, low crop factor and stable camera operation make me feel that I can continue using  the 1x3 anamorphic modes and MLVApp with confidence.   

Happy pixel peeping!

Solid approach. Thanks, IDA_ML.
I checked your testing photos. They are very convincing. To my eyes, at 100%, the difference looks more like codec loss than the effect of the much lower (roughly one-third) resolution. At original size, it is very difficult to tell the two apart. Considering the movie frame came from a 24 fps clip, so there was some motion blur, I'd say the two look the same to me. In other words, if the still were acquired as part of a sequence that could be combined into a video clip, it would look the same as the anamorphic one, because once the (unavoidable) motion blur is applied, the difference between the two images is masked completely.
At 100%, the viewing distance is like sitting at the very front of a large movie theatre, even closer to the big screen than the first row. This is not realistic in normal life. I normally sit in the middle rows of big theatres to enjoy the immersive large-screen experience. I notice that the front 5 to 10 rows are typically empty as long as the theatre is not full; audiences tend to sit a little further back if they can choose their seats.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 19, 2021, 12:52:53 AM
In anamorphic mode, you have to be careful when tweaking your footage in post: too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:


Yeah, I thought it was you who posted the example that gave me confidence I wasn't mad. ;) In my case, I'm not really doing any tweaking---just MLV App defaults, and I have similar problems.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee and export as PNG files.
So after I have my PNG image sequence I use FFmpeg to make a correct aspect ratio movie file out of it.

Interesting, I'll be sure to try that. I wonder if it has something to do with the order of operations, such that MLV App is sharpening before stretching, but if IDA_ML is getting good results (and given that ilia knows what he's doing), that seems unlikely.

The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstretching.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

Ah, the inevitable sharpness--artefact tradeoff. I might be happier with a bit more softness in exchange for absence of artefacts, so I'll play around with these options. Do you happen to know what MLV App uses by default?

Since you're here and know things about the internals, any idea on the status of this, from #3?: "does one have the option of implementing (horizontal) line skipping instead of binning?"
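To make the sharpness--artefact tradeoff concrete, here's a toy sketch in plain Python (my own illustration; it doesn't implement FFmpeg's actual gauss or lanczos kernels, just the general idea that harder interpolation keeps edges crisp while softer interpolation spreads them out):

```python
# Toy 3x horizontal "desqueeze" of one row of pixels, comparing two
# interpolation styles. Nearest-neighbour keeps edges hard (and jaggy);
# linear interpolation spreads them out (softer, closer in spirit to a
# gaussian tap).

def upscale_nearest(row, factor=3):
    # Repeat each source pixel `factor` times.
    return [v for v in row for _ in range(factor)]

def upscale_linear(row, factor=3):
    # Sample the source at fractional positions, blending neighbours.
    out = []
    n = len(row)
    for i in range(n * factor):
        x = i / factor
        lo = min(int(x), n - 1)
        hi = min(lo + 1, n - 1)
        t = x - lo
        out.append(row[lo] * (1 - t) + row[hi] * t)
    return out

edge = [0, 0, 0, 100, 100, 100]      # a hard vertical edge in one row
hard = upscale_nearest(edge)         # edge stays a one-pixel jump
soft = upscale_linear(edge)          # edge is spread over ~3 pixels
print(hard)
print([round(v) for v in soft])
```

The hard version keeps a single-pixel 0-to-100 jump (crisp, but jaggy on diagonals); the soft version ramps through intermediate values, trading a little acuity for smoothness -- the same tradeoff as gauss vs lanczos.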
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 19, 2021, 07:03:15 AM
At 100%, a few artefacts certainly appear: banding either side of vertical objects (lamp posts), zig-zags which would otherwise be clean (tramlines), some jaggies (cable on roof in foreground), horizontal stretching in fence poles, a general lack of definition in traffic signs, and the reds look rather muted (which could be down to editing). But now we're in full-on pixel-peeping mode, we belong in another forum, and we probably won't ever film anything beyond a brick wall and a studio test scene.

If these are the imperfections that are bothering you then, I would say, you are too demanding of your footage.  Nobody and nothing is perfect in this world, but the question is: do these artefacts matter when watching 4k footage, and are they perceptible at all?  I am not aware of anyone watching movies at 100% magnification.  As I mentioned here:

https://www.magiclantern.fm/forum/index.php?topic=26105.msg235661#msg235661

even my daughter is crazy about the anamorphic modes since she can pull selected high-quality, stunning-looking stills out of 5,7k anamorphic footage for Facebook.  Both the EF 85/1,4L IS and the EF 35/2 IS produce very beautiful portrait footage when used on a gimbal or even hand-held, and MLVApp produces the best skin tones that I have ever seen.

As far as your problem (trees with the sky behind them) is concerned, I am still very surprised about the artefacts that are visible even in full-screen view.  Please try different apertures and maybe a different lens.  I have experienced aberrations in out-of-focus areas even on high-end lenses such as the 70-200/2,8 L IS.  Maybe this is happening with your trees if your focus point is not on them.  Another reason could be motion blur, as mentioned by Mlrocks.  If the branches are moved by the wind, this can easily happen.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 19, 2021, 07:07:01 AM
Very interesting and useful discussion, by the way.  I hope, other experienced people will jump in too.
Title: Re: AI algorithms for debinning
Post by: Levas on July 19, 2021, 12:29:16 PM
Interesting discussion indeed, Ida.
Always interesting to see how others look at image quality and do their post process.

I start to believe that the anamorphic 5.7k raw on the 5D3 is a true 5.7k raw with a compression ratio of 6 or above: the horizontal binning gives a ratio of 3 or above, and the 14-bit lossless LJ92 adds another 1.5-2.5, depending on scene complexity and ISO level. The image quality may be "mediocre" compared to a native uncompressed 6k raw, but it should be much better than 1920x2340 if the AR is 2.40.

Currently, the Red Komodo offers 6K raw with compression ratio choices of 3 to 12, the same as the BMPCC 6K Pro. If implemented the same way, the Canon R5, Canon 1DX3 and C500MK2 have 6k raw with a compression ratio of 6.
...

...
My bet is that all of these cameras will be similar in terms of image quality when not pixel peeping, as long as the operator is an expert with the camera and with the raw process, able to release the full potential of the system.

In terms of resolution, the 5d3 and other ML cameras will never win in anamorphic modes, on resolution/detail that is.
The other cameras don't use pixel binning or line skipping as techniques to lower data rates.
The biggest difference between these professional video cameras and the 5d3 and other Canon DSLR's is the readout speed of the sensor.
The ML cameras can't read out fast enough to get enough pixels for 4K.
For example, the 6d sensor can be read out at a speed of about 90 megapixels per second.
UHD 3840x2160 x 24(fps) = about 200 megapixels per second...so not possible on the 6d (that's the reason why the highest crop mode on the 6d at 25 fps = 2880x1200, which is about 86 megapixels per second).

The 5d3 sensor has the fastest readout speed of all ML cameras, which is why it has higher resolutions available in crop and anamorphic modes, but it's still not fast enough to read out 3840x2160x24fps.
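Those throughput figures are easy to sanity-check with a couple of lines (a quick sketch; the helper name is mine):

```python
# Sensor readout throughput, in megapixels per second, for a video mode.
def throughput_mps(width, height, fps):
    return width * height * fps / 1e6

# UHD at 24 fps needs ~199 MP/s -- well beyond the 6d's ~90 MP/s.
print(throughput_mps(3840, 2160, 24))
# The 6d's highest 25 fps crop mode sits just under its readout limit.
print(throughput_mps(2880, 1200, 25))
```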

But how these 6:1 compression ratios work is a bit of a black box.
My bet is that most detail is cut off in dark(er) areas of the image.

I'm still also not sure how raw the BRAW of the Blackmagic cameras is, or how raw the CRM files of the Canon R5 are.
Blackmagic used to have raw cinema DNG, which is as raw as raw gets.
But the more you read about BRAW, the more it sounds like a video codec; some say the image is already debayered in BRAW  ???

What is also a mystery to me is Canon's Cinema Raw Light format (which is the only raw format in the Canon R5).
Cinema Raw Light has a high compression rate and there are no software tools available to extract a raw image sequence out of it.
Which is weird; if I buy a camera that shoots raw, I'd like to be able to somehow extract an image sequence out of the files in a raw format, like CR2, CR3 or DNG.
It doesn't matter if the CR3s or DNGs are lossy compressed, but I like to open them in different photo and video editors to compare different outputs.

That said, ML raw has some advantages over the 6:1 compression techniques.
ML raw gives you true 14-bit lossless raw compression (or lower, like 10 bit if you want, but still lossless compression).
So color detail and shadow areas are probably better compared with the 6:1 compression techniques.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 19, 2021, 01:22:25 PM
Thank you, Levas.  Could you please explain to me what 6:1 compression techniques are?  Is this some kind of lossless RAW compression or is it something else?
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 19, 2021, 03:20:28 PM
Interesting discussion indeed, Ida.
Always interesting to see how others look at image quality and do their post process.

In terms of resolution, the 5d3 and other ML cameras will never win in anamorphic modes, on resolution/detail that is.
The other cameras don't use pixel binning or line skipping as techniques to lower data rates.
The biggest difference between these professional video cameras and the 5d3 and other Canon DSLR's is the readout speed of the sensor.
The ML cameras can't read out fast enough to get enough pixels for 4K.
For example, the 6d sensor can be read out at a speed of about 90 megapixels per second.
UHD 3840x2160 x 24(fps) = about 200 megapixels per second...so not possible on the 6d (that's the reason why the highest crop mode on the 6d at 25 fps = 2880x1200, which is about 86 megapixels per second).

The 5d3 sensor has the fastest readout speed of all ML cameras, which is why it has higher resolutions available in crop and anamorphic modes, but it's still not fast enough to read out 3840x2160x24fps.

But how these 6:1 compression ratios work is a bit of a black box.
My bet is that most detail is cut off in dark(er) areas of the image.

I'm still also not sure how raw the BRAW of the Blackmagic cameras is, or how raw the CRM files of the Canon R5 are.
Blackmagic used to have raw cinema DNG, which is as raw as raw gets.
But the more you read about BRAW, the more it sounds like a video codec; some say the image is already debayered in BRAW  ???

What is also a mystery to me is Canon's Cinema Raw Light format (which is the only raw format in the Canon R5).
Cinema Raw Light has a high compression rate and there are no software tools available to extract a raw image sequence out of it.
Which is weird; if I buy a camera that shoots raw, I'd like to be able to somehow extract an image sequence out of the files in a raw format, like CR2, CR3 or DNG.
It doesn't matter if the CR3s or DNGs are lossy compressed, but I like to open them in different photo and video editors to compare different outputs.

That said, ML raw has some advantages over the 6:1 compression techniques.
ML raw gives you true 14-bit lossless raw compression (or lower, like 10 bit if you want, but still lossless compression).
So color detail and shadow areas are probably better compared with the 6:1 compression techniques.


Very nice discussion.
It seems to me that the 5D3's raw cannot handle darker areas well. I once accidentally underexposed my footage, and after raising the exposure in MLV App by about 3 stops, the noise made the footage unusable. Yet the 5D3's low-light performance is pretty good in photo mode. I used 10-bit lossless compression (down from 14-bit); maybe this caused the loss of shadow detail?
Also, I did not understand why the ML histogram emphasizes ETTR, as the 5D3 is strong in low light but not so good in highlights. It seems that ETTR is a must on all ML cameras to retain shadow detail.
By the same analogy, probably all those cameras with a compression ratio of 6 or above will have similar issues with shadow detail?
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 19, 2021, 03:33:51 PM
If these are the imperfections that are bothering you, then, I would say, you are too demanding to your footage.

They aren't. Sorry if I didn't make that clear, although I did try to. But if you say "happy pixel peeping", I'm going to pixel peep! :D

Just as others prefer the overall impression of the anamorphic mode at standard viewing distances, I prefer the overall impression of the crop modes. I think the eye is good at picking up on things that don't look right, even if it can't see the artefacts themselves. I think that's what's going on in my case, but I need to provide some examples (away at the moment). When I watch masc's anamorphic stuff, I think it looks fantastic, so as is usually the case, problems have solutions, even if we haven't identified them yet.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 19, 2021, 03:39:53 PM
They aren't. Sorry if I didn't make that clear, although I did try to. But if you say "happy pixel peeping", I'm going to pixel peep! :D

Just as others prefer the overall impression of the anamorphic mode at standard viewing distances, I prefer the overall impression of the crop modes. I think the eye is good at picking up on things that don't look right, even if it can't see the artefacts themselves. I think that's what's going on in my case, but I need to provide some examples (away at the moment). When I watch masc's anamorphic stuff, I think it looks fantastic, so as is usually the case, problems have solutions, even if we haven't identified them yet.

I agree with you. I think that, at the same resolution level, crop (pixel-by-pixel) imaging is clean and looks detailed when enlarged to 100%. The anamorphic modes need a special debinning process to get nice imaging at 100%. Probably this can be a future enhancement area and an interesting topic for MLers.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 19, 2021, 04:20:37 PM
The 5d3 sensor has the fastest readout speed of all ML cameras, that's why it has higher resolutions available in crop and anamorphic modes, but still not fast enough to readout 3840x2160x24fps.

"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

At this point, attempting to push this limit, if that's even possible, doesn't so much lie in increasing resolution for its own sake; I'm sure most of us agree that resolutions above 2 or 3K give starkly diminishing returns in terms of the impression of quality, and 4K is plenty even for the big screen (since people adjust their angle of view, be it on a phone or in a 25m cinema). Steve Yedlin does a fantastic analysis of this claim:




Rather, it lies in decreasing the crop factor while maintaining the image quality advantages of 1x1 modes over 3x3 or 1x3 modes (there for me in the case of 1x3; perhaps less so for other people). Not only does it look marginally better at full screen, it is also more tolerant of cropping, which gets lots of use, especially in more experimental cinema. If there are ways around the shortcomings of 1x3 modes, I'll be thrilled to find them, but I personally am not there yet.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 19, 2021, 05:37:21 PM
At the current stage, anamorphic imaging differs little from native imaging at the same resolution.

In practical video shooting, there is constant panning, zooming and tilting if the camera is mounted on a tripod, or constant complex movement if the camera is mounted on a steadicam, a gimbal, a pair of tracks, a drone or a chopper. This generates all kinds of motion blur, easily swamping the small difference in image quality mentioned above.
After the video is finished, in the journey from the production studio to the end audience, there will be further loss at each step. The best route is the commercial movie theater, currently typically with 2k or 4k 12-bit projectors; the loss is least by this route. As we discussed before, considering the viewing distance of the audience, the IQ difference will not be seen by them.
If the video is for broadcasting, either through cable or over the air, the bandwidth is limited, and the IQ loss will be much greater than when playing back in theaters. The audience will not see the IQ difference we found here.
If the video is for YouTube etc., or for Netflix/Amazon Prime streaming, the heavy codec and the limited bandwidth will eliminate the IQ difference we talk about here.
Personally, I found that viewing a computer monitor in full-screen mode is the most demanding way, because the viewing distance is much closer than when watching in a theater or watching TV. Even this way, the quality of the UHD anamorphic imaging is very similar to 3k native imaging.
In summary, if we look at the whole picture of content generation and all the routes by which video reaches the audience, the small IQ difference at the camera level is just a short tree in a forest of confounding factors.

On the other hand, the debinning process is far from perfect in ML anamorphic modes. There is still room to improve.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 19, 2021, 07:28:26 PM
"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

At this point, attempting to push this limit, if that's even possible, doesn't so much lie in increasing resolution for its own sake; I'm sure most of us agree that resolutions above 2 or 3K give starkly diminishing returns in terms of the impression of quality, and 4K is plenty even for the big screen (since people adjust their angle of view, be it on a phone or in a 25m cinema). Steve Yedlin does a fantastic analysis of this claim:




Rather, it lies in decreasing the crop factor while maintaining the image quality advantages of 1x1 modes over 3x3 or 1x3 modes (there for me in the case of 1x3; perhaps less so for other people). Not only does it look marginally better at full screen, it is also more tolerant of cropping, which gets lots of use, especially in more experimental cinema. If there are ways around the shortcomings of 1x3 modes, I'll be thrilled to find them, but I personally am not there yet.


There is a reason why the Red One was so popular in 2008. 4k raw recording was actually the standard for scanning old Super 35mm into a digital intermediate (DI). There were a lot of intensive discussions on how much resolution a frame of Super 35mm negative has and how to retain the most detail in digital media. Now Hollywood is using 8k r/g/b raw to scan Super 35mm negatives; however, the gain may be drastically diminished, as no one brags about 8k single-channel/24k tri-channel scanning as much as they did about 4k raw in 2008. It is generally considered that 4k raw is enough for Super 35mm resolution; higher resolution has little use due to the motion blur mentioned above. Just imagine this: can a photographer use a 30 MP high-resolution camera to take a perfectly clear and detailed image without any blur when the camera is moving and the subject is moving? If not, why bother with such a high-resolution camera?
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 19, 2021, 10:33:59 PM
Can a photographer use a 30 mp high resolution camera to take a perfectly clear and detailed image without any blur when the camera is moving and when the subject is moving? If not, why bother with such a high resolution camera?

I do agree with the thrust of your argument (diminishing returns). A couple of counterpoints:

I haven't looked into the extent to which motion blur annuls gains in resolution, but it's at least plausible that a streaking point looks better than a streaking blob by as much as a point looks better than a blob. By analogy in the world of stills, an astrophotographer capturing a star streak still cares about resolution. If we're talking about a truly Parkinsonian cameraman or a Jason Bourne fight scene, it may be another matter.

Even if that isn't the case, we should probably be careful of overestimating the proportion of scenes affected by motion blur. In the experimental stuff I film and watch, it's pretty low. Elsewhere, scenes in which foreground and background are both blurred are firmly in the minority (at least according to this viewer). Just having one element that is static is enough to lend the impression of overall detail to a scene, whence the value of sufficient resolution. Even brief moments of stillness in an otherwise movement-filled scene can give this impression.

As for whether a "debinning" algorithm can produce gains qualitatively different from those of a scaling algorithm, I'll have to defer to someone more knowledgeable than I. Since the binning occurs at the analogue level (see the fantastic thread on pixel binning patterns in LiveView---https://www.magiclantern.fm/forum/index.php?topic=16516.0), you are presumably talking about some kind of rebayering, followed by a second debayering step. Whether or not this would (or could) be non-zero sum, I don't know.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 20, 2021, 12:12:33 AM
I do agree with the thrust of your argument (diminishing returns). A couple of counterpoints:

I haven't looked into the extent to which motion blur annuls gains in resolution, but it's at least plausible that a streaking point looks better than a streaking blob by as much as a point looks better than a blob. By analogy in the world of stills, an astrophotographer capturing a star streak still cares about resolution. If we're talking about a truly Parkinsonian cameraman or a Jason Bourne fight scene, it may be another matter.

Even if that isn't the case, we should probably be careful of overestimating the proportion of scenes affected by motion blur. In the experimental stuff I film and watch, it's pretty low. Elsewhere, scenes in which foreground and background are both blurred are firmly in the minority (at least according to this viewer). Just having one element that is static is enough to lend the impression of overall detail to a scene, whence the value of sufficient resolution. Even brief moments of stillness in an otherwise movement-filled scene can give this impression.

As for whether a "debinning" algorithm can produce gains qualitatively different from those of a scaling algorithm, I'll have to defer to someone more knowledgeable than I. Since the binning occurs at the analogue level (see the fantastic thread on pixel binning patterns in LiveView---https://www.magiclantern.fm/forum/index.php?topic=16516.0), you are presumably talking about some kind of rebayering, followed by a second debayering step. Whether or not this would (or could) be non-zero sum, I don't know.

The link to the binning/debinning topic is very nice. Thank you.
In terms of motion blur, it is actually getting really heavy in Hollywood features. It is a major way Hollywood differentiates itself from the rest of the world. Nowadays Netflix and Amazon original shows are replacing the big network TV shows of the past. These shows don't rely on action as much as Hollywood features do, but there is a trend of soap-opera-style shows following the style of Hollywood features. On the other hand, TV shows do not require extremely high-resolution cameras because the distribution channels are typically cable and internet, which are limited in bandwidth. The main reason behind this trend is the easy availability of large-sensor cameras and large-aperture lenses on a low budget. Hollywood and the big networks use all kinds of camera movement, green screen and CG, tons of lighting, good acting, and special sound effects to differentiate themselves from indie movie makers and YouTube content providers.
If you go to some online camera tests, most of them don't shoot real scenarios; the subjects and the cameras are typically still, so high resolution still shows a benefit there. For field shooting, when a lot of camera and subject motion is involved, resolution is not that important. This is why the Arri Alexa series is so popular in Hollywood: the Alexa excels in dynamic range and color science, especially in skin tones, and even in fast-moving scenes those benefits are easily seen.
Title: Re: AI algorithms for debinning
Post by: BatchGordon on July 20, 2021, 02:45:23 AM
About a better way to manage debinning... I had an idea (some months ago) for a custom debayering that could give us a bit more horizontal resolution at the expense of some vertical resolution.
It could work, but... the problem is that it needs the green pixels to be binned in a "shifted" way between lines, as shown in the following post:
https://www.magiclantern.fm/forum/index.php?topic=16516.msg210484#msg210484 (https://www.magiclantern.fm/forum/index.php?topic=16516.msg210484#msg210484)
As you can see, the middle green pixel of a binning group on one line falls right between two groups on the following one (just ignore the line skipping in the picture; it's not our case for 5k anamorphic). Essentially, it's a three-pixel shift.

But... after some checking and tests I can confirm that, at least on the EOS-M with the latest ML version from Danne, the green pixels are binned with a single-pixel shift between one line and the next. So my idea cannot work on the currently recorded material.
If someone knows how to change the shift in binning (I played with registers with no success)... there could be a chance to test it; otherwise I think there's no way to improve the current quality.
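For readers following along, the 1x3 binning under discussion can be mimicked in a few lines (a deliberately oversimplified model: one colour channel, plain averaging, and none of the inter-line shifts described above -- the real analogue binning is more involved):

```python
# Simulate 1x3 horizontal binning of one colour channel: every three
# same-colour photosites collapse into one "fat" pixel by averaging.
def bin_1x3(samples):
    return [sum(samples[i:i + 3]) / 3 for i in range(0, len(samples) - 2, 3)]

row = [10, 20, 90, 90, 90, 30]
print(bin_1x3(row))  # two fat pixels: 40.0 and 70.0

# The inverse is underdetermined: very different triplets can average
# to the same fat pixel, which is why "debinning" can only guess at
# the lost horizontal detail.
assert bin_1x3([40, 40, 40]) == bin_1x3([10, 20, 90]) == [40.0]
```

This is why a debin step can at best make a plausible guess (or exploit shifted groups, as in the idea above), rather than recover the original three photosites exactly.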
Title: Re: AI algorithms for debinning
Post by: Levas on July 20, 2021, 11:23:31 AM
"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

The readout speed I'm talking about is literally the time it takes to read out the sensor.
You might think that sensor readout is very fast, since you can take pictures at 1/4000th of a second, right?
But when you're taking a picture, the sensor is actually exposed to light for about 0.2 seconds (the time it takes to read out the whole sensor at full resolution).
The shutter is the only reason you get your 1/4000th or even 1/60th of a second exposure.
Once the shutter closes, no more light reaches the sensor, but the sensor is still read out over the full 0.2 seconds.

A reasonable way to estimate the max readout speed of most Canon DSLRs is the max burst rate multiplied by the sensor resolution.
In the case of the 6d, the advertised max burst rate is 4.5 frames per second: 4.5 x 20 megapixels = 90 megapixels per second.
For the 5d3, the advertised burst rate is 6 frames per second: 6 x 22 megapixels = 132 megapixels per second.

So for pure UHD/4K resolution, sensor readout speed will be a bottleneck.
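The estimate above, as a quick sketch (advertised burst rates and sensor sizes as given; the helper name is illustrative):

```python
# Rough upper bound on sensor readout: advertised burst rate times
# full sensor resolution, giving megapixels per second.
def readout_estimate_mps(burst_fps, sensor_mp):
    return burst_fps * sensor_mp

print(readout_estimate_mps(4.5, 20))  # 6d:  90.0 MP/s
print(readout_estimate_mps(6, 22))    # 5d3: 132 MP/s
```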

Not much of a problem though, because at the moment, write speed is the biggest bottleneck.
The 6d can do 2880x1200x25fps in crop mode, but not continuously in 14-bit lossless raw  :-\
Title: Re: AI algorithms for debinning
Post by: Levas on July 20, 2021, 11:39:03 AM
Thank you, Levas.  Could you please explain to me what 6:1 compression techniques are?  Is this some kind of lossless RAW compression or is it something else?

It's about how much the data is compressed.
A video frame of 1920x1080x14 bits = 29030400 bits; divided by 8 -> 3628800 bytes; divided by a million -> 3.6288 megabytes.
So an uncompressed 14-bit 1920x1080 frame is 3.6 MByte in size.
With a 6:1 compression ratio, your file size becomes 6 times smaller (so instead of 3.6 MByte it would become 0.6 MByte).
In ML, 14-bit raw is uncompressed, so the compression ratio is 1:1.
Then, a few years ago, Alex found out about the lossless LJ92 compression available in camera (the standard LJ92 compression, also used for lossless compression by the Adobe DNG converter).
This is a lossless compression which makes the file size about 33% smaller, so ML lossless raw compression actually has about a 1.5:1 compression ratio.
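The arithmetic above as a quick sketch (the function name is illustrative):

```python
# Uncompressed raw frame size, and what different compression ratios
# do to it.
def frame_mb(width, height, bits):
    return width * height * bits / 8 / 1e6

raw = frame_mb(1920, 1080, 14)
print(raw)        # ~3.63 MB per uncompressed 14-bit frame
print(raw / 1.5)  # ML lossless LJ92 at ~1.5:1 -> ~2.4 MB
print(raw / 6)    # a 6:1 lossy codec -> ~0.6 MB
```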

Other camera manufacturers have options for a 6:1 compression ratio; the 8K raw in the R5 is done with Canon's Cinema Raw Light format, which has an advertised compression ratio of 5:1.
Most of these are advertised as lossy compression formats, not lossless...but how the compression is done is in most cases kept a mystery by the manufacturers.
But there should definitely be some color/detail loss along the way.
Title: Re: AI algorithms for debinning
Post by: Kharak on July 20, 2021, 12:36:51 PM
I think the LJ92 compression is closer to 40-50 % reduction, depending on brightness of scene.

The R5 raw is not raw, I graded and shot a lot of R5 footage and the lossy compression is evident in the shadow detail. The noise floor is really bad, barely any information can be harvested from the shadows.

The R5 8k "raw" has less latitude compared to ML RAW, but that is not surprising with the amount of compression.
Title: Re: AI algorithms for debinning
Post by: Levas on July 20, 2021, 01:30:34 PM
Ah yes, it's called LJ92 compression and not J92 compression, and 5dr is of course R5 (fixed it in my post  :P )

What you say about the R5 raw footage confirms my expectations.
The Canon Cinema Raw Light file format could be considered more like a new codec-type recording option than a raw-format recording option.

Probably all those high-ratio raw formats like 6:1 and 5:1 could be considered as new codec types to choose from (instead of H.264 or H.265).

Since there is no way to get a raw image sequence from the R5 CRM files, I wouldn't even be surprised if it isn't intraframe compression (All-I) but some IPB compression on raw data.
Could be the case; since it's one big file, nobody knows what data is really in there  :P
 
Title: Re: AI algorithms for debinning
Post by: Levas on July 20, 2021, 01:41:59 PM
The R5 raw is not raw, I graded and shot a lot of R5 footage and the lossy compression is evident in the shadow detail. The noise floor is really bad, barely any information can be harvested from the shadows.

Curious, which raw format are you talking about? Looking at the R5 specs, it seems to have 3 options for raw,
where the 2600 Mbps option is called Raw and not Raw Light  ???

8k Raw (29.97p/25.00p/24.00p/23.98p): Approx. 2600 Mbps
8k Raw (Light) (29.97p/25.00p): Approx. 1700 Mbps
8k Raw (Light) (24.00p/23.98p): Approx. 1350 Mbps

Recording is in 12-bit, so if I calculate correctly, uncompressed should be about 10000 Mbps (12-bit at 24 fps).
So the highest option has a compression ratio of about 4:1.
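A quick check of that estimate (the 8192x4320 frame size is my assumption for DCI 8K; the post only gives the bit depth, frame rate and advertised bitrates):

```python
# Uncompressed data rate for 12-bit 8K at 24 fps, assuming an
# 8192x4320 frame (DCI 8K -- an assumption, not from the post).
uncompressed_mbps = 8192 * 4320 * 12 * 24 / 1e6
print(round(uncompressed_mbps))            # ~10192 Mbps, i.e. "about 10000"
print(round(uncompressed_mbps / 2600, 1))  # vs the 2600 Mbps option -> ~3.9:1
```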
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 20, 2021, 01:49:24 PM
Thanks a lot, Levas, for this explanation.  Am I wrong if I say that ML RAW video has the best quality in terms of data loss (zero) due to compression, among all codecs available to date?
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 20, 2021, 03:15:31 PM
The read out speed I'm talking about is literally the time it takes to read out the sensor.

Very interesting, thanks for the explanation.

Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?


Not much of a problem though, because at the moment, writing speed is the biggest bottleneck.

If this is the case, why is it that the maximum observed speed of around 135 MB/s when card spanning is less than the sum of the max speeds to CF and SD cards (approx. 90 + 70 = 160 MB/s) when not card spanning?
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 20, 2021, 04:02:40 PM
Thanks a lot, Levas, for this explanation.  Am I wrong in saying that ML RAW video has the best quality in terms of data loss (zero) due to compression among all codecs available to date?

Arri Raw is uncompressed and unencrypted. Other than that, Sony, Red, and Blackmagic all have a compression ratio of 3 as their best option. I'd say you are right that ML raw is the best in the industry.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 20, 2021, 04:08:10 PM
I think the LJ92 compression is closer to a 40-50% reduction, depending on the brightness of the scene.

The R5 raw is not raw; I graded and shot a lot of R5 footage, and the lossy compression is evident in the shadow detail. The noise floor is really bad; barely any information can be harvested from the shadows.

The R5 8k "raw" has less latitude compared to ML RAW, but that is not surprising with the amount of compression.

It would be interesting if you could test the R5 8K raw at its different compression ratios against the 5D3 anamorphic 6K raw, to see whether the 5D3 anamorphic 6K raw measures up to the R5 8K raw. Of course, the best way is to compare BMPCC or Red Komodo 6K raw to the 5D3 anamorphic 6K raw and see how big the difference is.
Title: Re: AI algorithms for debinning
Post by: theBilalFakhouri on July 20, 2021, 04:48:05 PM
Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?

There is no 3584x1730 crop mode on the 5D3; the native crop is 3584x1320 1:1 @ 30 FPS. Using the crop_rec module there are 3072x1920 1:1 @ 24 FPS and 3840x1536 1:1 @ 24 FPS; all of these = ~142 MP/s.
In the custom Danne build for the 5D3 there is 3264x1836 1:1 @ 24 FPS = ~144 MP/s.

Fun fact: there is a Full-Res LiveView mode which is 5784x3870 @ 7.4 FPS = ~166 MP/s. There are other limits whose origin we still need to figure out: for example, we can already do 3072x1920 @ 24 FPS in 1:1 mode, but we can't achieve 1920x3072 @ 24 FPS in 1x3 binning (anamorphic) mode on the 5D3, even though it's the same readout speed.

These limits are probably related to the FPS timers (at a certain value they give a corrupted image or freeze the camera even though we haven't hit the sensor speed limit; we probably need to tweak other registers related to the FPS timers on the 5D3 which we don't know yet).

At high FPS, sensor speed becomes more limited, e.g. 1920x1040 3x3 @ 48 FPS = ~96 MP/s. Why? (Again, maybe it's related to the FPS timers or other related registers, or we are missing some info about sensor speed and how it should be calculated.)
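The readout figures quoted in this thread are just width × height × frame rate, ignoring any blanking overhead; a quick sketch over the modes listed above:

```python
# Sensor readout in megapixels per second for the 5D3 modes mentioned above.
# Simple resolution * frame rate; real readout includes blanking the product ignores.
modes = {
    "3584x1320 1:1 @ 30 (native crop)": (3584, 1320, 30),
    "3072x1920 1:1 @ 24 (crop_rec)":    (3072, 1920, 24),
    "3840x1536 1:1 @ 24 (crop_rec)":    (3840, 1536, 24),
    "3264x1836 1:1 @ 24 (Danne build)": (3264, 1836, 24),
    "5784x3870 @ 7.4 (Full-Res LV)":    (5784, 3870, 7.4),
    "1920x1040 3x3 @ 48":               (1920, 1040, 48),
}
for name, (w, h, fps) in modes.items():
    print(f"{name}: ~{w * h * fps / 1e6:.0f} MP/s")
# -> ~142, ~142, ~142, ~144, ~166, and ~96 MP/s respectively
```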
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 20, 2021, 05:04:39 PM
Arri Raw is uncompressed and unencrypted. Other than that, Sony, Red, and Blackmagic all have a compression ratio of 3 as their best option. I'd say you are right that ML raw is the best in the industry.

Well, if that is the case, why does nobody among the big manufacturers use ML RAW?  How come no one ever came to ML and said: What you guys are doing here is fantastic.  Why don't you let us use your ML RAW method in one of our new camera models, and we will help A1ex and the other developers financially to continue their efforts in further developing ML?

Just imagine a compact camera that can film true 4K 14-bit ML RAW 1:1 video with lossless compression (583 MB/s are required for 4K@60 fps according to Levas's formula) and an mSATA SSD instead of a CF card to get 600 MB/s write speed for continuous recording and no overheating issues! For a company like Canon, developing a compact model with such an interface would be a piece of cake!  Why don't they do it?
Title: Re: AI algorithms for debinning
Post by: names_are_hard on July 20, 2021, 05:16:16 PM
Quote
How come no one ever came to ML and said: What you guys are doing here is fantastic.  Why don't you let us use your ML RAW method in one of our new camera models, and we will help A1ex and the other developers financially to continue their efforts in further developing ML?

Hacking an existing cam to do a new raw mode is impressive, but if you're making new hardware, it's simple.  Dumping raw frames to disk is about the simplest way you can do things.  And it's (probably) cheaper and more predictable to use your own full-time staff, whom you're likely already paying anyway.

They don't make this cam because it's expensive to hit the data rates needed for full-frame raw, and it's assumed there's no market for it at the price that would be required.  I'd guess they're right about that assumption; most people, even in the world of film making, simply don't need it.  You can get (very expensive) industrial/scientific cams that do work this way.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 20, 2021, 05:17:45 PM
Well, if that is the case, why does nobody among the big manufacturers use ML RAW?  How come no one ever came to ML and said: What you guys are doing here is fantastic.  Why don't you let us use your ML RAW method in one of our new camera models, and we will help A1ex and the other developers financially to continue their efforts in further developing ML?

Just imagine a compact camera that can film true 4K 14-bit ML RAW 1:1 video with lossless compression (583 MB/s are required for 4K@60 fps according to Levas's formula) and an mSATA SSD instead of a CF card to get 600 MB/s write speed for continuous recording and no overheating issues! For a company like Canon, developing a compact model with such an interface would be a piece of cake!  Why don't they do it?


The 5D3 and ML disrupted the whole mid-to-low-end cinema camera market. Only very few high-end cinema cameras can compete with 5D3 ML raw. I remember that many years ago someone did a test of the Alexa Classic, the Red One, and 5D3 ML Raw 1080p 3x3. The conclusion was that the 5D3 ML matched the Alexa Classic very closely; only the highlight roll-off fell short of the Alexa Classic. But which camera on earth could beat the Alexa's highlight roll-off back then?

Even now with 5D3 ML, if a test confirms that 5D3 ML 6K anamorphic raw generally matches the Red Komodo or BMPCC 6K Pro, it will go viral across the internet. My now "outdated, ancient, nobody-wants-it", less-than-200-bucks 650D can do 4.5K raw, yet a brand-new, all-acclaimed Canon C70 with a price tag of $6000 can only do 4K without raw?! Imagine a hundred-buck, second-hand, unknown EOS M with ML doing 5K raw, matching what a fanboy's Red Komodo does, yet at pocket size. Then who is the real fanboy, and what does this mean for the camera manufacturers, the video makers, and Hollywood?!

All these above-4K and 135 full-frame games are about differentiation to justify the extremely high profits of the camera-making industry. They have nothing to do with the general audience's viewing experience. If Canon had not changed the firmware code and A1ex could do the same on the 5D4 and R5, then Sony, Canon, Arri, and Red would all be out of business in a few years, or would shrink significantly.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 20, 2021, 07:00:07 PM
There is no 3584x1730 crop mode in 5D3


It might be slightly modified in the latest build (I don't have my camera with me to check), but it was there in Danne's September 2020 build (see post 619 here: https://www.magiclantern.fm/forum/index.php?topic=23041.600). Either way, I take your point that there seem to be different sensor readout limits in different modes, which is very interesting (I'm assuming you have a way of knowing that the limiting factor in each case is indeed the readout speed, and not something else).


we can already do 3072x1920 @ 24 FPS in 1:1 mode, but we can't achieve 1920x3072 @ 24 FPS in 1x3 Binning (anamorphic) mode on 5D3, even if it the same read out speed


Weird! If the binning is done at the analogue level, could this affect the readout speed?

What I'm still in the dark about is where the 135 MB/s card-spanning write-speed limit comes from. Is this another mystery?
Title: Re: AI algorithms for debinning
Post by: theBilalFakhouri on July 20, 2021, 09:37:14 PM
@Jonneh

Oh sorry, you are correct. Yes, there is 3584x1730 @ 24 FPS in the latest Danne build.

Weird! If the binning is done at the analogue level, could this affect the readout speed?

Not the Binning modes exactly, but the FPS timers:

Timer A is directly related (https://www.magiclantern.fm/forum/index.php?topic=19300.msg196374#msg196374) to horizontal resolution. Timer B is directly related to vertical resolution. They are not the same, but if you increase one resolution, you may also have to increase the corresponding FPS timer.

from:
https://www.magiclantern.fm/forum/index.php?topic=19300.msg197098#msg197098

In order to increase vertical resolution you need to increase FPS Timer B (increasing FPS Timer B decreases FPS). I could do 1920x3072 1x3 @ ~20 FPS, but not 24 FPS; in this case we need to lower FPS Timer B to get 24 FPS at 1920x3072 in 1x3 mode, but doing that broke the image and might freeze the camera. It's weird because we didn't hit the readout speed limit yet. There are other *head* timers related to the FPS timers, and tweaking them is not enough; maybe there are other registers that need tweaking.

BTW there is no problem like that on 700D . .

What I'm still in the dark about is where the 135 MB/s card-spanning write-speed limit comes from. Is this another mystery?

In LiveView the camera uses more memory cycles, resulting in lower write speeds; lowering the framerate via "FPS override" helps.

In my previous tests, the maximum write speed with card spanning in PLAY mode was ~155 MB/s (5D3.123) using the 160 MHz, 192 MHz, and 240 MHz overclock presets. In LiveView the write speed decreases a bit because more memory cycles are used, which brought it down to ~135 MB/s at 24 FPS.

The ~155 MB/s write speed limit in PLAY mode comes from the memory (RAM), so I think it's a memory speed limit here. Bypassing this memory speed limit somehow may, in theory, increase the card-spanning write speed.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 21, 2021, 01:29:50 AM
I just did some tests on the 650D in anamorphic UHD AR 2.4, 14-bit / 14-bit lossless, 24 fps mode; for a complex scene, the data rate calculated by ML is around 55 MB/s. If the 5D3 can bypass the FPS limit in high-FPS mode, then with the current maximum write speed of ~135 MB/s, 60 fps slow motion in anamorphic UHD AR 2.4, 14-bit / 14-bit lossless, may be possible on the 5D3. On the 5D3, UHD and DCI 4K use sensor areas of about Super 35mm 3-perf to 4-perf, so they are very "cinematic". It would be exciting if the 5D3 could do UHD/DCI 4K raw 60 fps slow-mo.
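The extrapolation here is simple proportionality; a sketch that takes the measured 55 MB/s at 24 fps as given (actual rates are scene-dependent with lossless compression):

```python
# If 14-bit lossless anamorphic UHD measures ~55 MB/s at 24 fps,
# scaling linearly with frame rate gives the 60 fps estimate.
measured_rate_24 = 55  # MB/s, from the ML data-rate display (measured, scene-dependent)
estimated_rate_60 = measured_rate_24 * 60 / 24
print(estimated_rate_60)  # 137.5 MB/s -- right at the ~135-139 MB/s spanning limit
```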
Title: Re: AI algorithms for debinning
Post by: Bender@arsch on July 21, 2021, 10:20:32 AM
Interesting discussion ;)

Btw, the maximum write speed on the 5D Mark III is not 135 MB/s. The limit is somewhere else.
In my tests I got 138 MB/s, and I wrote about this in the forum here.
https://www.magiclantern.fm/forum/index.php?topic=23041.msg230881#msg230881

But later I reached 139 MB/s, never more.
I reached this with the 3.5K preset at 3584x1730, 23.976 fps, 10-bit lossless (tweaked down from 14-bit lossless), with card spanning, SD overclocking at 160 MHz, and kill global draw (5x crop preview).
I stopped manually ;) and this is not a high peak number, this is an average number -> I double-checked this on the computer.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 21, 2021, 01:47:00 PM
In order to increase vertical resolution you need to increase FPS Timer B (increasing FPS Timer B decreases FPS). I could do 1920x3072 1x3 @ ~20 FPS, but not 24 FPS; in this case we need to lower FPS Timer B to get 24 FPS at 1920x3072 in 1x3 mode, but doing that broke the image and might freeze the camera. It's weird because we didn't hit the readout speed limit yet. There are other *head* timers related to the FPS timers, and tweaking them is not enough; maybe there are other registers that need tweaking.


Fascinating how byzantine the gears and levers are that need to be moved to get a desired result. Proper detective work!

In LiveView the camera uses more memory cycles, resulting in lower write speeds; lowering the framerate via "FPS override" helps.

In my previous tests, the maximum write speed with card spanning in PLAY mode was ~155 MB/s (5D3.123) using the 160 MHz, 192 MHz, and 240 MHz overclock presets. In LiveView the write speed decreases a bit because more memory cycles are used, which brought it down to ~135 MB/s at 24 FPS.

The ~155 MB/s write speed limit in PLAY mode comes from the memory (RAM), so I think it's a memory speed limit here. Bypassing this memory speed limit somehow may, in theory, increase the card-spanning write speed.

I see---that makes things clearer. So the 155 MB/s is a RAM bottleneck, and the 135 MB/s (or 139, per Bender's current record) is the same minus the LiveView overhead. I'm sure I'm missing something obvious here, but what is PLAY mode? As I know it, it's just for playback, and no writing occurs there.
Title: Re: AI algorithms for debinning
Post by: theBilalFakhouri on July 21, 2021, 04:17:40 PM
..
Btw, the maximum writespeed on the 5D Mark III is not 135mb/s. The edge is somewhere else.
In my Tests I got 138mb/s and I wrote this in the Forum here.
..

But later I reached 139mb/s, but never more.
...

Yeah, I never said it's exactly 135 MB/s; I always added the approx sign before the numbers ("~135 MB/s"). At some point I also got ~138 MB/s write speed, so let's say it's around ~135 MB/s to ~139 MB/s write speed using card spanning (23.976 FPS).


I'm sure I'm missing something obvious here, but what is PLAY mode? As I know it, it's just for playback, and no writing occurs there.

We use PLAY mode for running card benchmarks since in this mode there is no overhead from anything, so it gives us the highest CF/SD card controller speed.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 21, 2021, 05:06:37 PM
We use PLAY mode for running card benchmarks since in this mode there is no overhead from anything, so it gives us the highest CF/SD card controller speed.

Gotcha---good to know!
Title: Re: AI algorithms for debinning
Post by: masc on July 21, 2021, 09:44:13 PM
I'll try to answer all the MLVApp questions...
In the latest MLV App version, if a low-contrast lens is used and the sky is overcast, I use 81 contrast, 81 clarity (microcontrast), 81 chroma-separation sharpening.
I wonder if it has something to do with the order of operations, such that MLV App is sharpening before stretching, but if IDA_ML is getting good results (and given that Ilia knows what he's doing), that seems unlikely.
Do you happen to know what MLV App uses by default?
Don't use the MLVApp sharpen sliders for anamorphic footage. Stretching is done after sharpening, so you'll get bad lines. Better to sharpen after the export in your NLE. By default, MLVApp doesn't sharpen at all.

If MLV App has a transformation/debinning algorithm to reverse engineer the Canon binning process, or has an AI algorithm to guess right the original 3 pixels from this fat pixel, then the anamorphic UHD will be very close to the native UHD.
You can't undo binning. You can only use better or worse algorithms for unsqueezing: most applications use bilinear or bicubic for this job. MLVApp uses AVIR, which gives better stretching results.

I once accidentally underexposed my footage; after raising the exposure by about 3 stops in MLV App, the noise made the footage unusable. Yet the 5D3's low-light performance is pretty good in photo mode.
The results should be the same between photo and video mode, and if you use the same applications for processing, you'll notice that. MLVApp brings very similar results to Adobe ACR; it's just that ACR has a denoiser enabled by default (switch it off and they look mostly identical). When using defaults, processing video in MLVApp and photos in ACR, you'll think the photo is better - but it isn't.
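To illustrate the point above about unsqueezing algorithms, here is a toy sketch (not MLVApp's actual AVIR code, which is far more sophisticated) comparing nearest-neighbour repetition with linear interpolation when stretching a binned row by 3x:

```python
# Minimal sketch: upscaling a binned row by 3x with nearest-neighbour vs.
# linear interpolation. Real resamplers (bicubic, Lanczos, AVIR) do much
# better; this only illustrates why the algorithm choice matters.

def nearest_3x(row):
    # Each binned pixel simply repeated three times -> blocky edges.
    return [v for v in row for _ in range(3)]

def linear_3x(row):
    # Interpolate between neighbouring binned pixels -> smoother ramps.
    out = []
    for i in range(len(row)):
        nxt = row[min(i + 1, len(row) - 1)]
        for k in range(3):
            out.append((row[i] * (3 - k) + nxt * k) / 3)
    return out

row = [0, 90, 0]  # a binned "edge" between dark and bright pixels
print(nearest_3x(row))  # [0, 0, 0, 90, 90, 90, 0, 0, 0]
print(linear_3x(row))   # [0.0, 30.0, 60.0, 90.0, 60.0, 30.0, 0.0, 0.0, 0.0]
```

Nearest-neighbour keeps the hard staircase (the "bad lines" mentioned above); even simple linear interpolation fills the missing pixels with a gradient.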
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 21, 2021, 10:12:30 PM
I'll try to answer all the MLVApp questions...Don't use the MLVApp sharpen sliders for anamorphic footage. Stretching is done after sharpening, so you'll get bad lines. Better sharpen after the export in your NLE. By default, MLVApp doesn't sharpen at all.
You can't undo binning. You can only use better or worse algorithms for unsqueezing: most applications use bilinear or bicubic for this job. MLVApp uses AVIR, which gives better stretching results.
The results should be the same between photo and video mode, and if you use the same applications for processing, you'll notice that. MLVApp brings very similar results to Adobe ACR; it's just that ACR has a denoiser enabled by default (switch it off and they look mostly identical). When using defaults, processing video in MLVApp and photos in ACR, you'll think the photo is better - but it isn't.

Thank you very much for the instructions on MLV App, masc. It is very nice to know these tips.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 21, 2021, 10:25:27 PM
I just did a test on the 5D3 in the following modes: 1x1 UHD and 1x3 anamorphic UHD, both at 10-bit color depth with 14-bit lossless compression. The aspect ratio was set to 3 to make 1x1 UHD continuous. The scene was composed of side trees with dense green leaves, an apartment gate, and a parking lot with cars. The focal length of the lens was about 28mm. I used MLV App to process the raw into ProRes 4444. The footage was viewed in PotPlayer on my 27-inch 1920x1080 computer screen in full-screen mode. I can see that the details of the leaves are noticeably better in 1x1 UHD mode than in 1x3 anamorphic UHD mode. For the apartment gate, the parking lot, and the cars, the difference is not noticeable.
I have a little concern with this experiment. The 1x3 anamorphic UHD is not an independent mode on the 5D3; it lives within the anamorphic mode, where you can choose a horizontal resolution of 1280. I am not sure this UHD submode is optimized enough. I tested the 650D's anamorphic UHD, which is an independent mode, and the results for the same scene (different dates, maybe different times of day) showed that the tree leaves were pretty sharp. I will do more controlled tests on this aspect.
Right now, IDA_ML's experiment is probably more convincing. Hopefully others can do more tests on 1x1 versus 1x3 modes using different approaches, such as similar experiments on an Apple 5K Retina monitor, etc., so that the conclusion will be more solid.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 22, 2021, 06:00:40 PM
I can see that the details of leaves are noticeably better in 1x1 UHD mode than in 1x3 anamorphic UHD mode. For the apartment gate and the park lot and the cars, the difference is not noticeable.

I am wondering what is so special about the green leaves.  Is it the green color, maybe, that causes the trouble?
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 22, 2021, 09:14:50 PM
Don't use the MLVApp sharpen sliders for anamorphic footage. Stretching is done after sharpening, so you'll get bad lines. Better sharpen after the export in your NLE. By default, MLVApp doesn't sharpen at all.

Good to know. Does this order of precedence have to be the way it is? Intuitively, I would think that most operations, and not just sharpening, would be best done on the resized image, but I could be wrong about that. I don't typically need (like) to sharpen, so it's unlikely that I did when I noticed the artefacts, but I'll bear it in mind for when I do a comparison and troubleshooting. By "default", I was actually referring to the resizing algorithm---good to know it's AVIR.

I just did a test on 5D3 in the following modes: 1x1 UHD, and 1x3 anamorphic UHD.

Out of curiosity, do you have a DNG (or just a jpeg) of the anamorphic shot where you see the difference in quality in how the leaves are rendered? I'd be interested in seeing if our results are comparable. If not, any jaggies and colour artefacts, or just a general softness? And did you focus on the trees or somewhere else (depending on this distance, all may be in focus with a 28mm, so this might be moot)?

I'll do some tests of my own in a few days' time. I have the 100D with its own anamorphic mode to compare results.

NB: Interesting reflections on the state of the industry vis-à-vis motion blur and resolutions in your last reply to me. Good to hear from someone following these trends---I'm just a hobbyist who doesn't watch series. I'm told I should. ;)

I am wondering what is so special about the green leaves.  Is it the green color, maybe, that causes the trouble?

I always assumed it was the high contrast of a backlit object combined with the typical intricacy of branches and leaves. I've seen similar problems on silhouetted trees, so I don't think it's the green, although it was a plausible guess!
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 22, 2021, 09:44:23 PM
I am wondering what is so special about the green leaves.  Is it the green color, maybe, that causes the trouble?

I guess the dense details caused the problem. Most people use this foliage test as a wide-angle landscape challenge to probe a camera's or lens's weaknesses.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 22, 2021, 09:45:35 PM
Good to know. Does this order of precedence have to be the way it is? Intuitively, I would think that most operations, and not just sharpening, would be best done on the resized image, but I could be wrong about that. I don't typically need (like) to sharpen, so it's unlikely that I did when I noticed the artefacts, but I'll bear it in mind for when I do a comparison and troubleshooting. By "default", I was actually referring to the resizing algorithm---good to know it's AVIR.

Out of curiosity, do you have a DNG (or just a jpeg) of the anamorphic shot where you see the difference in quality in how the leaves are rendered? I'd be interested in seeing if our results are comparable. If not, any jaggies and colour artefacts, or just a general softness? And did you focus on the trees or somewhere else (depending on this distance, all may be in focus with a 28mm, so this might be moot)?

I'll do some tests of my own in a few days' time. I have the 100D with its own anamorphic mode to compare results. In my rough test, the leaves were noticeably softer in 1x3 than in 1x1. I did not see significant chromatic aberration.

I will do a more rigorous test later. I need to compare the 650D's UHD modes with the 5D3's UHD modes. It may take several weeks due to limited free time. Stay tuned.

NB: Interesting reflections on the state of the industry vis-à-vis motion blur and resolutions in your last reply to me. Good to hear from someone following these trends---I'm just a hobbyist who doesn't watch series. I'm told I should. ;)

I always assumed it was the high contrast of a backlit object combined with the typical intricacy of branches and leaves. I've seen similar problems on silhouetted trees, so I don't think it's the green, although it was a plausible guess!

I did not have a dng shot for the scene to compare the video footage. I think IDA_ML already did that test.
Title: Re: AI algorithms for debinning
Post by: Jonneh on July 22, 2021, 09:48:56 PM
I did not have a dng shot for the scene to compare the video footage. I think IDA_ML already did that test.

Oh, I just meant a DNG corresponding to one frame of the video, not a CR2 raw file, but you may not have the file in that format. A jpeg screen capture would do, but not to worry otherwise.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 22, 2021, 09:53:56 PM
Oh, I just meant a DNG corresponding to one frame of the video, not a CR2 raw file, but you may not have the file in that format. A jpeg screen capture would do, but not to worry otherwise.

I did not do that. In this rough test, the footage in 1x3 was noticeably softer than in 1x1, but that might be due to other reasons. In the near future I will do a more controlled test to eliminate, or at least reduce, the confounding factors.
I think IDA_ML's approach and results say enough that 1x3 is good enough. My original purpose with this rough test was to see whether footage in video mode would at least reduce the difference.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 23, 2021, 05:46:06 AM
I just did an experiment. The scene was composed of side trees with dense green leaves, an apartment gate, and a parking lot with cars. The evening sun lit parts of the scene, and there was strong wind. I used the same lens on the 5D3 and the 650D, with the cameras set up on a tripod. The whole experiment was done within 30 minutes. The focal length of the lens was about 28mm. All parameters in MLV App were left at defaults (except for one piece of footage, which I will mention later). I used MLV App to process the raw into ProRes 4444. The footage was viewed in PotPlayer on my 27-inch 1920x1080 computer monitor in full-screen mode.


The following modes were tested:

5D3 1x1 UHD
5D3 1x3 anamorphic UHD 10 bit color depth
5D3 1x3 anamorphic UHD 14 bit color depth
5D3 1x3 5.7k anamorphic 10 bit color depth
5D3 1x3 5.7k anamorphic 14 bit color depth
650D 1x1 3K
650D 1x3 anamorphic UHD 10 bit color depth
650D 1x3 anamorphic UHD 14 bit color depth

The common acquisition parameters were 14-bit lossless compression. The aspect ratio was set to 3 to make all modes continuous. The acquisition period was 1 minute for each take.


Here are my observations:

First, the 10-bit and 14-bit color depth modes were not different in terms of IQ, because the MLV App parameters were at defaults.

Second, confirming my previous experiment, I can see that the details of the leaves were noticeably better in 5D3 1x1 UHD mode than in 5D3 1x3 anamorphic UHD mode. For the apartment gate, the parking lot, and the cars, the difference was not noticeable. Some of the trees had wind shake, so the difference was not noticeable there either; motion blur masks the difference.
However, the difference between the 5D3 1x1 UHD mode and the 5D3 1x3 anamorphic UHD mode was much greater than the difference between the 650D 1x1 3K and the 650D 1x3 anamorphic UHD. In addition, the metadata of the 5D3 1x3 anamorphic UHD 14-bit footage was not recognized in MLV App, so I had to manually change the stretching ratio to 0.33. I suppose the 5D3 1x3 anamorphic UHD submode is not optimized; I recommend not using this mode, even for testing. It would be great to have a separate 1x3 UHD 60 fps preset on the 5D3.

Third, the 5D3 1x3 5.7K anamorphic was as detailed as the 5D3 1x1 UHD, if not more so. Watching the footage again, I think the 5D3 1x3 5.7K anamorphic was noticeably a little sharper than the 5D3 1x1 UHD, though not by much. Therefore, there is no advantage to using 5D3 1x1 UHD, considering that 5D3 1x3 5.7K anamorphic at 14-bit color depth is continuous, except that 5D3 1x1 UHD has a much shorter processing time in MLV App. In the future, a test of the details of 5D3 1x3 5.7K anamorphic vs 5D3 1x1 5.7K would be more appropriate. If the difference in IQ is not significant, as demonstrated on the 650D, I would rate the 5D3 as a 6K raw camera. Red Komodo, BMPCC 6K Pro, and Canon C500 Mark II users are welcome to challenge this claim.

Fourth, the 650D 1x1 3K was the sharpest of all the modes tested, and the 650D 1x3 anamorphic UHD was very close to it in terms of IQ detail. The sharpness of these modes is consistent with my previous experiments on the 650D, so I consider it real. I am confident that a general audience will not see the difference between the 650D 1x3 anamorphic UHD and the 650D 1x1 3K in most commercial theaters with 2K projectors, nor by watching cable TV or internet-streamed video. Therefore, I will use 1x3 anamorphic UHD at 14-bit color depth as my main mode on the 650D. It has a data rate of 55 MB/s and is thus continuous on the 650D. I would rate the 650D as a 4K raw camera. BMPCC 4K and Canon C70/C200 users are welcome to challenge this claim.

Fifth, I am surprised to see that the 650D modes were noticeably sharper than the 5D3's. More experiments are needed to verify whether this is true.


Future directions:

All of the above observations are based on unprocessed footage. I will do post-processing on each piece of footage to improve the IQ; I think post-processing will mask, or at least mitigate, the differences observed.

More studies with different approaches to testing 1x1 vs 1x3 modes by community members would be helpful. I am curious to see whether similar experiments viewed on a 5K monitor give the same results.

Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 23, 2021, 07:31:59 AM
Fourth, the 650D 1x1 3K was the sharpest of all the modes tested, and the 650D 1x3 anamorphic UHD was very close to it in terms of IQ detail. The sharpness of these modes is consistent with my previous experiments on the 650D, so I consider it real. I am confident that a general audience will not see the difference between the 650D 1x3 anamorphic UHD and the 650D 1x1 3K in most commercial theaters with 2K projectors, nor by watching cable TV or internet-streamed video. Therefore, I will use 1x3 anamorphic UHD at 14-bit color depth as my main mode on the 650D. It has a data rate of 55 MB/s and is thus continuous on the 650D. I would rate the 650D as a 4K raw camera. BMPCC 4K and Canon C70/C200 users are welcome to challenge this claim.

This is exactly why 1x3 UHD/24 fps/12-bit lossless is my mode on the EOS M too, which, as far as I know, has the same sensor as the 650D.  I am glad to see that you came to that conclusion too.  Yes, processing time is longer in MLVApp, but it is well worth it.  The results are way better compared to the 3x3 mode I was filming in before, especially in terms of aliasing, finest-detail rendering, and tone-transition smoothness.  It simply looks much better on full screen, period.  What I usually do is let the laptop process all my footage overnight, and when I wake up in the morning it is ready for video editing in Resolve.

And thank you for the extensive experiments, Mlrocks!  I am sure, after reading this thread, many people will fall in love with the 1x3 anamorphic mode on ML capable cameras.  And if Masc and Ilia could think of some magic to fix the "green leaves" issue in the 5D3, that would be absolutely fantastic!
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 23, 2021, 08:35:23 AM
This is exactly why the 1x3 UHD/24 fps/12bit lossless is my mode on the EOS-M too which, as far as I know, has the same sensor as the 650D.  I am glad to see that you came to that conclusion too.  Yes, processing time is longer in MLVApp but it is well worth it.  The results are way better compared to the 3x3 mode, that I was filming in before, especially in terms of aliasing, finest detail rendering and tone transition smoothness.  It simply looks much better on full screen, period.  What I usually do is let the laptop process all my footage overnight and when I wake up in the morning it is ready for video editing in Resolve.

And thank you for the extensive experiments, Mlrocks!  I am sure, after reading this thread, many people will fall in love with the 1x3 anamorphic mode on ML capable cameras.  And if Masc and Ilia could think of some magic to fix the "green leaves" issue in the 5D3, that would be absolutely fantastic!

Thank you for your enlightenment on the 1x3 anamorphic modes, IDA_ML. I can tell that you have extensive experience and a passion for the 5D3 and ML. ML will be a landmark in the history of digital cinema cameras. Cheers!
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 23, 2021, 06:54:09 PM
To be conservative, based on experimental data, I would confidently rate the 5D3 with ML as a full-sensor VistaVision 14-bit 4K raw cinema camera. The 5D3 ML 14-bit 5.7K 1x3 anamorphic footage is as good as any top-end 4K raw cinema camera's. ML's 14-bit lossless compressed raw is equal to uncompressed Arri Raw, and better than Red Raw, BRAW, Sony Raw Lite, and Canon Cinema Raw Light, even at their best compression ratios. The 5D3 ML overheating issue can be addressed by frequently turning the camera off between takes; considering that the boot time of the 5D3 with ML is less than 10 seconds, instead of minutes on Red cameras, this is a practical measure. Also, two or more 5D3s can be used in turn to allow time to cool down. Therefore, even in mission-critical professional shooting environments, the overheating issue can be managed.

I would rate the 70D with ML as a super 35mm 10-bit 4K raw run-and-gun camera. The 70D has a maximum write speed of 80 MB/s and could support a continuous preset of 10-bit, 14-bit-lossless-compressed, 1x3 anamorphic 5.2K raw at 24p, AR 2.4. The 70D's 5.2K 1x3 footage may well be equal to Red and Blackmagic cameras' 4K raw at a compression ratio of 12. With its excellent video AF and flip-out LCD screen, the 70D with ML can be used on a steadicam, on a drone, etc.

It is generally agreed that 4K raw is good enough for theater-size large screens. With several 5D3 and 70D cameras in the bag, and with vision and passion, MLers are ready to rock Hollywood features.

Good job, ML and MLers.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 24, 2021, 12:27:55 AM
To bust another million-dollar myth to benefit MLers: Arri's Alexa and Amira series excel in highlight roll-off in a way no other camera can match, which is Arri's best selling point and lets them charge 10 to 50 times more than cameras like the 5D3 with ML. In real-world shooting, a cinematic setup is typically a controlled environment, meaning that the lighting can be controlled. You can use lighting to reduce the DR. Therefore, you are paying $100k over a 5D3 with ML as insurance, not as a guarantee that your final footage will be better with an Arri Alexa than with a 5D3. On the contrary, the footage may be very close and intercuttable if the lighting is set up right.
Arri's Alexa and Amira series are best used in uncontrollable environments, where the lighting cannot be controlled, such as news gathering. I think the Arri Amira may be the best news-shooting camera. However, you will be lucky if your back is not broken after a whole day of shoulder shooting with an Arri Alexa or Amira. On the other hand, nobody appreciates that you are shooting news with Arri cameras, neither the news agency nor the common TV audience. They can tell the difference, but they don't care, because it is just a one-time piece.
The truth is that you are buying an Arri Alexa for insurance, for that prestigious top-brand camera look to impress your clients on set, not really for the cinematic or filmic look the internet is raving about. With the proper lighting setup, the 5D3 with ML can do the same as the Arri Alexa and Amira do.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 24, 2021, 04:10:59 AM
Also, two or more 5D3 ML can be used in turn to have time to cool down. Therefore, even in mission critical professional shooting environment, the overheating issue can be solved.

If you watch this tutorial by Masc:

https://www.magiclantern.fm/forum/index.php?topic=25180.msg231798#msg231798

you will see that he is not using SD overclocking at all and he gets continuous recording at the anamorphic 5.7k/12-bit lossless/24 fps/2.35 AR setting.  At my daughter's graduation party I used the 5D3 extensively all day long and its temperature never exceeded 56 deg. C.  In my experience, overheating is not really an issue when filming UHD 1x3 anamorphic on the 5D3, and SD overclocking is not really necessary.

One more word about 10-bit recording.  As you have noticed, there is not really a difference in image quality between 10- and 14-bit lossless.  The only issue that you may experience with 10 bits is in high-contrast scenes if you severely crush the darkest areas.  In that case you get ugly brown-reddish colors in the crushed areas (more pronounced on the EOS-M than on the 5D3).  That is why I always try to expose to the right until zebras start appearing and then dial the exposure back by some amount to protect the highlights.  I always use a VND filter for that to preserve the nice full-frame look with narrow depth of field.  In this case, severe crushing of the dark areas is very unlikely in most filming situations.

I have also found that Dual ISO does a great job of reducing noise and protecting the darks in high-contrast scenes such as sunsets with the sun in the frame and night videography under street lights.  The 5.7k anamorphic mode provides enough vertical resolution for that, so very beautiful, high-image-quality videos with a filmic look are possible.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 24, 2021, 06:52:53 AM
If you watch this tutorial by Masc:

https://www.magiclantern.fm/forum/index.php?topic=25180.msg231798#msg231798

you will see that he is not using SD-overclocking at all and he gets continuous recording at the anamorphic 5,7k/12 bit lossless/24 fps/2,35 AR setting.  At my daughter's graduation party I used the 5D3 extensively all day long and its temperature never exceeded 56 deg. C.  In my experience, overheating is not really an issue when filming UHD 1x3 anamorphic on the 5D3 and SD-overclocking is not really necessary.

One more word about 10-bit recording.  As you have noticed there is not really a difference in image quality between 10 and 14 bit lossless.  The only issue that you may experience with 10 bits is in high-contrast scenes if you severely crush the darkest areas.  In that case you get these ugly brown-redish colors in the crushed areas, (more pronounced on the EOS-M than on the 5D3).  That is why I always try to expose to the right until zebras start appearing and then I dial the exposure back by some amount to protect the highlights.  I always use a VND filter for that to preserve the nice full-frame vision with the narrow depth of field.  In this case severe crushing of the dark areas is very unlikely in most filming situations.   

I have also found that Dual ISO does a great job in reducing noise and protecting the darks in high-contrast scenes such as sun sets with the sun in the frame and night videography at street lights.  The 5,7k anamorphic mode provides enough vertical resolution for that, so that very beautiful high image quality videos with a filmic look are possible.

Thanks a lot for your great tips, IDA_ML. It is good to know that SD overclocking is not needed when using the 5D3 in professional settings, so there is no overheating issue. I agree that 10-bit lossless is good enough for most cases. I have not tried Dual ISO much due to the monitoring difficulties; I will try it more. Regards,
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 24, 2021, 09:59:32 PM
I just did a quick test on my 5D3, with card spanning on and SD card overclocking off. The following modes were continuous for 1 minute on a relatively complex scene, after which I manually stopped the recording. The data rate calculated by ML was about 110 MB/s in both modes for the scene. So overheating can be avoided entirely.

1. 10 bit color depth, 14-bit lossless compression, 5.7k 1x3 anamorphic raw, AR 2.67, 24 p
2. 10 bit color depth, 14-bit lossless compression, centered 3.5k 1x1 raw, AR 2.67, 24 p

The centered 3.5K mode is good for specific reasons: it has a super 35mm/APS-C image circle, so EF-S lenses and lenses designed for super 35mm can be used in this mode, like the Sigma 18-35mm f/1.8 and the Canon 10-18mm F5.6 IS. 3.5k is actually the most optimized resolution for super 35mm; beyond that, other side effects may show up. The Arri Alexa Classic uses a 3.4k super 35mm sensor, the latest Arri LF uses two of the same sensor, and the Arri 65 uses three. It seems that Arri considers 3.4k the resolution limit for a super 35mm sensor size, and the 5D3 has a 3.5k super 35mm crop within its sensor.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 24, 2021, 10:02:20 PM
For the 650D, I did a quick test with 4.5k 1x3, 10-bit depth, 14-bit lossless compression, AR 2.67, 24 fps; the data rate calculated by ML was about 36 MB/s for a complex scene. So 5.2k 1x3, 10-bit depth, 14-bit lossless compression, AR 2.67, 24 fps seems continuous on the 650D with SD card overclocking at 240 MHz, if such a preset is implemented. 5.2k 1x3 anamorphic would use the full APS-C/super 35mm sensor width, so there would be a lot of benefits. The resolution of this 5.2k 1x3 may be close to, or even indistinguishable from, native 4k 1x1.
Title: Re: AI algorithms for debinning
Post by: theBilalFakhouri on July 25, 2021, 01:27:27 AM
..So 5.2k 1x3, 10-bit color depth, 14-bit lossless compression, AR 2.67, 24 fps, seems continuous on 650D with SD card overclocking at 240 hz if such a preset is implemented. 5.2k 1x3 anamorphic will use the full APS-C/super 35mm sensor width, so there will be a lot of benefits..

You can already have it in my 650D build (https://www.magiclantern.fm/forum/index.php?topic=25784.msg231049#msg231049) but without real-time correct preview, load crop_new module, restart the camera, go to "Crop mode V2" submenu --> "Choose preset..." --> Select "2.35:1 1x3" --> press "SET" button in LiveView to apply the preset (settings) if it didn't apply.

Now you should see 1736x2214 (2.35:1 AR) 1x3 in "Crop mode V2" and also in "RAW video"; go to the "RAW video" submenu and choose 2.67:1 AR, and you should get 1736x1954 @ 23.976 FPS (5208x1954 after stretching).

If you want 10-bit depth in lossless, select 14-bit lossless from the "RAW video" submenu and use 10-bit depth from "Crop mode V2"; during my tests it was mostly continuous (depending on scene, global draw off), SD overclock @ 240 MHz.
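As a rough sanity check on why lossless compression makes this preset fit, the uncompressed data rate of a 1736x1954 @ 23.976 fps stream can be computed directly. The ~60 MB/s sustained card budget below is an assumption for illustration (based on the card rates quoted elsewhere in this thread), and real lossless ratios vary strongly with the scene:

```python
# Back-of-the-envelope data rate for the 1736x1954 @ 23.976 fps
# 1x3 preset described above. The 60 MB/s sustained card speed
# is an assumed budget, not a measured spec.

def raw_rate_mb_s(width, height, bits_per_sample, fps):
    """Uncompressed Bayer data rate in MB/s (1 MB = 10**6 bytes)."""
    return width * height * bits_per_sample * fps / 8 / 1e6

rate_14 = raw_rate_mb_s(1736, 1954, 14, 23.976)  # ~142 MB/s
rate_10 = raw_rate_mb_s(1736, 1954, 10, 23.976)  # ~102 MB/s

card_budget = 60.0  # assumed sustained write speed, MB/s
print(f"14-bit uncompressed: {rate_14:.0f} MB/s")
print(f"10-bit uncompressed: {rate_10:.0f} MB/s")
print(f"needed lossless ratio at 10 bits: {rate_10 / card_budget:.1f}x")
```

In other words, the lossless codec only needs to achieve roughly a 1.7x ratio on a 10-bit stream for the mode to be continuous, which is why complex high-ISO scenes (which compress worse) are the ones that stop.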
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 25, 2021, 05:51:28 AM
You can already have it in my 650D build (https://www.magiclantern.fm/forum/index.php?topic=25784.msg231049#msg231049) but without real-time correct preview, load crop_new module, restart the camera, go to "Crop mode V2" submenu --> "Choose preset..." --> Select "2.35:1 1x3" --> press "SET" button in LiveView to apply the preset (settings) if it didn't apply.

Now you should see 1736x2214 (2.35:1 AR) 1x3 in "Crop mode V2" also in "RAW video", go to "RAW video" submenu and choose 2.67: AR, you should get 1736x1954 @ 23.976 FPS (5208x1954 after stretching).

If you want 10-bitdepth in lossless, select 14-bit lossless from "RAW video" submenu, and use 10 bit-depth from "Crop mode V2", during my tests it was mostly continuous (depending on scene, global draw off), SD overclock @ 240 MHz.

Thank you very much, theBilalFakhouri. This is really great. You have turned a "nobody" 650D into a 4k raw run-and-gun camera. I downloaded today's build of your ML version and tested this mode just now. With ISO set to 100, even for a relatively complex scene with dense foliage, it was continuous (recorded over 1 minute, manually stopped) in the APS-C full-width 5208x1954 1x3 anamorphic 24 fps 10-bit depth 14-bit lossless compression AR 2.67 mode. The data rate calculated by ML was 60-65 MB/s for the scene. The footage looked nice in MLV App. When the ISO increased beyond 400, the recording stopped after several seconds. When I changed the AR to 3, it was continuous again. Another option is to change the bit depth to 8-bit; the post-processing latitude suffers a little, but the AR can be kept at 2.67. Yet another option is to turn off global draw and just deal with the monitoring difficulties. Therefore, I rate the 650D as a super 35mm 10-bit 4k raw camera at ISO 100 (during daytime or under good lighting).

I just tested a night home scene with a kitchen table and a lamp on the 650D. ISO was set to 6400, with global draw off, sound recording off, and SD card overclock at 240 MHz; both the 5208x1954 1x3 anamorphic 24 fps 10-bit depth 14-bit lossless compression AR 2.67 mode and the 3k 1x1 24 fps 10-bit depth 14-bit lossless compression AR 2.67 mode were continuous (recorded over 1 minute, manually stopped). The data rates calculated by ML were 65-70 MB/s for the scene. Therefore, with global draw off, sound recording off and SD card overclock at 240 MHz, 5208x1954 1x3 anamorphic 24 fps recording can be continuous even at high ISO. If recording the 5.2k mode at night in a forest, changing to 8-bit depth as a last resort will give another 10-20% of bandwidth, enough to make the mode continuous.

As a very rough first impression of the 5.2k 1x3 footage, the details were great, at the same level as the 1x1 3k footage, though a controlled comparison between the two modes is still needed. There were pink frames, mostly at the beginning and some in the middle of the footage.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 25, 2021, 07:16:39 AM
Mlrocks,

Stay away from 8 bits!  This bit depth produces really ugly colors and artifacts in the dark areas, at least in my experience.  You will be much better off filming at 1x3 1280x2160/10-bit lossless/24 fps, where you get continuous recording and excellent image quality.  For landscape videography, I recommend 1x3 1736x2928/16 fps.  On the EOS-M this setting produces stunning image quality, and as long as you move the camera slowly, the 16 fps has a negligible effect on the jerkiness of the footage.  With fast-moving objects you get a nice motion blur.  It works very well with Dual ISO too.
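Geometrically, the desqueeze these 1x3 presets rely on is just a 3x horizontal stretch: the 1280x2160 frame above becomes 3840x2160 UHD. Here is a minimal nearest-neighbor sketch of that step (MLV App offers smarter interpolation methods, which is why the chosen desqueeze algorithm matters for final sharpness):

```python
def desqueeze_1x3(rows):
    """Stretch a 1x3-binned frame horizontally by replicating each
    pixel 3 times (nearest neighbor). `rows` is a list of rows of
    pixel values; real desqueezing interpolates instead of copying."""
    return [[px for px in row for _ in range(3)] for row in rows]

squeezed = [[10, 20], [30, 40]]  # tiny 2x2 example frame
print(desqueeze_1x3(squeezed))   # each pixel repeated 3x horizontally

# A 1280-wide row becomes 3840 wide, matching the UHD target:
print(len(desqueeze_1x3([[0] * 1280])[0]))  # 3840
```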
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 25, 2021, 03:18:18 PM
Mlrocks,

Stay away from 8 bits!  This bit depth produces really ugly colors and artifacts in the dark areas, at least to my experience.  You will be much better off filming at 1x3 1280x2160/10bitLL/24fps where you get continuous recording and excellent image quality.  For landscape videography, I recommend 1x3 1736x2928/16fps.  On the EOS-M this setting produces stunning image quality and as long as you move the camera slowly, the 16 fps have negligible effect on the jerkiness of the footage.  With fast moving objects you get a nice motion blur.  It works with Dual ISO very well too.

Thank you for the useful warning, IDA_ML. I will not use 8-bit depth. As you mentioned, using lower resolution and/or lower frame rates for continuous recording is the better way to go. The 5208 1x3 mode is probably more suitable for the 70D due to its higher write speed.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 25, 2021, 11:15:07 PM
I just went out for a short field shoot using the 650D's 4.8k 1x3 mode. I got continuous recording for all of the scenes, and a green dot for most of them, in the 4.8k 1x3 anamorphic 1600x1800 10-bit depth 14-bit lossless compression 24 fps AR 2.67 mode. I did long takes, from 1 minute to 9 minutes. The data rate calculated by ML was around 50-55 MB/s. I have not seen any pink frames yet. The shoot was about 1 hour, almost non-stop, and the camera temperature was about 33 degrees Celsius before I shut the camera down to pack up. So SD card overclocking at 240 MHz may not cause severe overheating if the shooting time is less than 4 hours. Overheating from overclocking may become an issue on a 12-hour day session, but this can be mitigated by using two or more cameras and by shutting cameras down between takes.

Image-quality wise, I used ETTR and tried to lower the exposure in MLV App, but if I lowered it more than 1.5 stops, there was a pink/purple cast in the extreme highlight areas. If I lowered the exposure by 1 or 1.5 stops, it was fine. There were focus dots; I used chroma smoothing 3x3 to clear them off. I only added contrast and clarity (local contrast?) and some color changes. Skipping sharpening did not matter much; the image looked detailed and sharp enough, like the look of vintage German lenses: not razor sharp, but detailed. The footage viewed in MLV App was sensational, way better than the 1080p 1x1 footage I took at the same venue last year with the 650D and 5D3.

On the 650D, I will use 4.8k 1x3 AR 2.67 for IQ, UHD 1x3 if a 16:9 AR is required, and 2.8k 1x1 if a super 16mm lens is mounted. On the 5D3, I will use 5.7k 1x3 AR 2.67 for IQ and centered 3.5k 1x1 for super 35mm/APS-C lenses. I hope a UHD 1x3 60 fps mode can be implemented on the 5D3 so that all of the modes on the 5D3 and 650D are at about the 4k level. I think I am pretty much covered for almost all situations with these modes. I will use the 5D3 with ML for the rest of my life. I don't envy those new "n k" cameras.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on July 26, 2021, 10:55:37 AM
I will use 5D3 ML for the rest of my life. I don't envy those new "n k cameras".

Well, I wouldn't say no to the continuous tracking DP autofocus, articulating screen and in camera stabilization but we are not going to see ML on the R5/R6 anytime soon :-))).
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 26, 2021, 02:45:50 PM
Well, I wouldn't say no to the continuous tracking DP autofocus, articulating screen and in camera stabilization but we are not going to see ML on the R5/R6 anytime soon :-))).

Yeah, these are really nice features. I am eyeing the Olympus EM-1 series for them. The EM1 3 has the best IBIS, a good articulating screen and a nice EVF, and its continuous AF is good too. Its in-camera 4k codec and image quality are OK but not great, but its HDMI output to an Atomos recorder can be 4k raw, with a crop factor of about 1.5 (roughly super 16mm). Since Olympus changed ownership, I am not sure if there will be new models. If the EM1 4 or 5 reaches the current video features of the GH6, I will pull the trigger.
I really hope that ML can break into the new R5 series; that would be a dream.
Title: Re: AI algorithms for debinning
Post by: Dmytro_ua on July 26, 2021, 04:52:38 PM
I really hope that ML can break into the new R5 series, that will be dream.

R5 is a great CAM even without ML, so if you can afford it - it's a great choice ;)
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 26, 2021, 04:57:02 PM
R5 is a great CAM even without ML, so if you can afford it - it's a great choice ;)

Certainly, the R5 has 8k raw. But ML has waveform, false color, etc.; only high-end cinema cameras have these features. Price-wise, the R5 reaches $4000, and there are a lot of good competitors, like the BMPCC 6K Pro, Canon C70, etc. These are real video cameras, more suitable for video shooting than the R5. If I were spending $4000 today on a camera specifically for video, I would go for a BMPCC 6K Pro with an EVF and a battery grip, plus several batteries, about $3500 in total. That setup is far better than the R5 for video shooting. For photos only, a 6D2 is well enough for full frame, or an M50 for APS-C. Without ML, the 5D4, R5 and 1DX3 are good but not that appealing.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 27, 2021, 03:55:03 AM
Canon EOS R5 Overheating Tests: The True Story
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 27, 2021, 09:16:30 AM
Canon EOS R5 Firmware v1.10 Overheating Tests and Review
Title: Re: AI algorithms for debinning
Post by: Dmytro_ua on July 27, 2021, 09:17:25 AM
Canon EOS R5 Overheating Tests: The True Story

It all depends on your shooting style and needs. A lot of ML users are video hobbyists, and overheating is not an issue for many of them.
The R5 is also a great stills camera: FF, DP autofocus, IBIS, etc., which none of the dedicated video cameras listed above are. That's why it's not correct to compare the prices.  ;)

If overheating is the only reason not to buy this cam, there are some 3rd-party modifications. They void your warranty, though.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 27, 2021, 09:20:17 AM
It all depends on your shooting style and needs. A lot of ML users are video hobbyists. Overheating is not an issue for a lot of users.
R5 is also a great stills camera, FF, DP autofocus, IBIS, etc. Which is not any of the listed above dedicated video cameras. That's why it's not correct to compare the price.  ;)

It seems overheating is Canon's trick to cripple the R5 at the firmware level. The updated firmware has improved the overheating issue a lot.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 27, 2021, 04:00:48 PM
If the R5 2 or 3 has the EM1 3's level of IBIS and rolling-shutter performance, I will buy it over the EM1 4 or 5. Otherwise, the EM1 4 or 5 is the better choice.
Title: Re: AI algorithms for debinning
Post by: 70MM13 on July 27, 2021, 04:15:25 PM
Hi everyone,
I was lurking and saw this discussion so I dusted off my 5d3 and shot this little clip using the anamorphic mode, converted to dng using mlvapp, and then stretched and graded in resolve.
I specifically wore the patterned shirt to stress test for moire, and it seems not bad, but if you look closely at my shorts you may notice vertical stripes that shouldn't be there!
But it is pretty subtle.
It's such a treat to have full frame recording.  It makes my 35mm T1.5 really look great!

Title: Re: AI algorithms for debinning
Post by: Dmytro_ua on July 27, 2021, 04:53:45 PM
If R5 2 or 3 has the EM1 3 level's IBIS and rolling shutter effects, I will buy it over EM1 4 or 5. Otherwise, EM1 4 or 5 is a better choice.

These are absolutely different animals.
Don't forget that the R5 is full frame and the Olympus is 4/3.
A bigger sensor will have more rolling shutter and a less capable IBIS, as it has to move much bigger hardware. Though I've never used Olympus cameras and can say nothing about them.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 27, 2021, 09:56:19 PM
These are absolutely different animals.
Don't forget that R5 is FullFrame and Olympus is 4/3"
Bigger matrix will have more rolling shutter and less capable IBIS as it has to move a much bigger hardware. Though I've never used Olympus cameras and can say nothing about them.

I understand that 4/3 is despised among photogs, but it is actually very close to 35mm movie film, and APS-C is almost super 35mm. I bought an EM5 about 6 years ago and am still keeping it. Its IBIS is really tripod-like as long as you keep using wide and normal lenses, and that is just 4 stops of IBIS; the EM1 3 now has 7 stops. The EM5 doesn't have a firmware hack, so its video modes are totally outdated, but its photo functions are actually pretty nice and still good enough for me. I am only using it for photo shooting, mounting vintage MF LTM lenses. I have a collection of vintage MF lenses, so the 5D3's lack of video AF does not bother me at all, and I have a Zacuto EVF, so I am fine with the 5D3 not having a swivel LCD. Currently, none of the Canon, Sony, Panasonic or Pentax cameras have any serious IBIS compared to the EM1 3. Yet IBIS is the most important feature for handheld video shooting.

Sometimes I just wonder about all this mirrorless hype. Yes, without a mirror the camera can be made smaller, but physics prevents full-frame lenses from getting as small as 4/3 lenses. I think 4/3 is actually pretty suitable for handheld portable video shooting: with small lenses and cameras, and without a tripod, it is very low-profile. Unfortunately, Olympus and Sony share many critical stockholders, so Olympus is not allowed to bust Sony's video camera market. Otherwise, Olympus IBIS could be another disruptive handheld revolution in the cinema world, after the 5D2/5D3/ML VistaVision sensor-size revolution.
Title: Re: AI algorithms for debinning
Post by: ArcziPL on July 28, 2021, 03:59:28 PM
Sometimes I just wonder about all of these going mirrorless hype.
For photography mirrorless means for me:
- much better AF: no front-focus/back-focus issues (they were a pain in the ass on all my bodies and most lenses f/2.8 or faster, and I had plenty of them), face/eye detection, incomparably more "focus points" -> all in all, much faster in use and much more precise AF
- much better auto exposure
- much better DoF control: I can finally see it in the viewfinder; DSLR's viewfinders are just crap, no matter if APS-C or FF
- histogram, additional other overlays, menu and photo review in viewfinder, no need to often switch between viewfinder and main LCD.

Until switching to mirrorless I took most of my shots in LV, just to overcome the above problems. Now I finally have the LV in the viewfinder. No more surprises like "oops, I forgot I am in M and all photos are badly underexposed".
Drawbacks: for me, none. The higher current consumption is not noticeable for me.


What I consider a hype is full frame, especially when shooting FF with f/4 lenses. I see no benefit over APS-C.
Mirrorless APS-C or m4/3 is much smaller and lighter but allows the same small depth of field (if desired) and the same number of photons per photosite just by taking a faster lens. Taking an even faster lens on FF is not possible or not practicable because a) it might not exist (there is e.g. no equivalent of the Sigma 18-35 f/1.8 Art) and b) the DoF will be too small. When shooting f/1.4 with APS-C I usually consider the DoF too narrow already; f/1.4 on FF is even worse. I don't see a FF+lens combo which would give me any benefits over APS-C mirrorless with my Sigma 18-35 f/1.8 Art, Sigma 50mm f/1.4 Art (+speedbooster, making it a 32mm f/1.0 lens, or a 50mm f/1.4 FF equivalent) + a bunch of other lenses, including stabilized 10-xx lenses, a pocket-sized stabilized x-200mm, a pancake 22mm f/2.0, etc.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 29, 2021, 07:09:22 AM
For photography mirrorless means for me:
- much better AF: no FF/BF (it was pain in the ass on all my bodies and most of lenses equal to or faster than f/2.8 and I had plenty of them), face/eye-detection, uncomparable more "focus points" -> all in all much faster in use and much more precise AF
- much better auto exposure
- much better DoF control: I can finally see it in the viewfinder; DSLR's viewfinders are just crap, no matter if APS-C or FF
- histogram, additional other overlays, menu and photo review in viewfinder, no need to often switch between viewfinder and main LCD.

Until switching to mirrorless I took most of my shots in LV, just to overcome the above problems. Now I finally have the LV in viewfinder. No more surprises like "oops, I forgot I am in M and all photos are much-underexposed".
Drawbacks: for me none. Higher current consumption is not noticable for me.


What I consider a hype is fullframe. Especially when shooting FF with f/4 lenses. I see no benefit over APS-C.
Mirrorless APS-C or m4/3 is much smaller & ligher but allows same small depth of field (if desired) and amount of photons per photosite just by taking a faster lens. Taking an even faster lens on FF is not possible or not practicable because a) might not exist (there is e.g. no equivalent of Sigma 18-35 f/1.8 Art) b) DoF will be too small. When shooting f/1.4 with APS-C I consider DoF usually too narrow already. f/1.4 on FF is even worse. I don't see a FF+lens combo, which would give me any benefits over APS-C mirrorless with my Sigma 18-35 f/1.8 Art, Sigma 50 mm f/1.4 Art (+speedbooster being a 32mm f/1.0 lens or 50 mm f/1.4 FF equivalent) + a bunch of other lenses including stabilized 10-xx lenses, pocket-sized stabilized x-200mm, pancake 22mm f/2.0 etc.

I agree that APS-C or 4/3 mirrorless systems are actually better than FF for getting all of the benefits of going smaller and going LV.
Title: Re: AI algorithms for debinning
Post by: mlrocks on July 29, 2021, 07:15:37 AM
For photography mirrorless means for me:
- much better AF: no FF/BF (it was pain in the ass on all my bodies and most of lenses equal to or faster than f/2.8 and I had plenty of them), face/eye-detection, uncomparable more "focus points" -> all in all much faster in use and much more precise AF
- much better auto exposure
- much better DoF control: I can finally see it in the viewfinder; DSLR's viewfinders are just crap, no matter if APS-C or FF
- histogram, additional other overlays, menu and photo review in viewfinder, no need to often switch between viewfinder and main LCD.

Until switching to mirrorless I took most of my shots in LV, just to overcome the above problems. Now I finally have the LV in viewfinder. No more surprises like "oops, I forgot I am in M and all photos are much-underexposed".
Drawbacks: for me none. Higher current consumption is not noticable for me.


What I consider a hype is fullframe. Especially when shooting FF with f/4 lenses. I see no benefit over APS-C.
Mirrorless APS-C or m4/3 is much smaller & ligher but allows same small depth of field (if desired) and amount of photons per photosite just by taking a faster lens. Taking an even faster lens on FF is not possible or not practicable because a) might not exist (there is e.g. no equivalent of Sigma 18-35 f/1.8 Art) b) DoF will be too small. When shooting f/1.4 with APS-C I consider DoF usually too narrow already. f/1.4 on FF is even worse. I don't see a FF+lens combo, which would give me any benefits over APS-C mirrorless with my Sigma 18-35 f/1.8 Art, Sigma 50 mm f/1.4 Art (+speedbooster being a 32mm f/1.0 lens or 50 mm f/1.4 FF equivalent) + a bunch of other lenses including stabilized 10-xx lenses, pocket-sized stabilized x-200mm, pancake 22mm f/2.0 etc.

Like you, I used live-view shooting on Canon DSLRs before I got an EM5 with an EVF, and I agree with you on all of the benefits of LV shooting. However, there are advantages to using an optical viewfinder. Hollywood high-end cinema cameras used to have optical viewfinders for old-school DPs. The main point of an optical viewfinder is that there is no lag when the subject is moving. Another great benefit I found for myself is that an optical viewfinder is much easier on the eyesight than an EVF.
Title: Re: AI algorithms for debinning
Post by: mlrocks on August 16, 2021, 08:36:19 PM
Interestingly, the 70D is actually 5.4k (5472 x 3648) in full-resolution mode. This is almost like the 5D3's 5.7k (5760 x 3840). The full-resolution 1x3 modes of the 70D and 5D3 may not be equal to a 5.5k-6k 1x1 raw camera, but should easily be equal to a 4k 1x1 raw camera.
https://www.dpreview.com/reviews/canon-eos-70d/2
The 70D has a maximum write speed of 80 MB/s. In the future, there may be another 10 MB/s of improvement if SD card overclocking is configured in an optimized way. Even with the 80 MB/s threshold, the 70D can do continuous 5.4k 1x3 anamorphic AR 2.67 24 fps 10-bit depth 14-bit lossless compression, with a data rate of about 75 MB/s for a relatively complex scene. The image quality may be comparable to the RED One's 4k R3D raw at its highest compression ratio of 3.
The 70D's dual-pixel live-view AF is very good in video mode when coupled with STM lenses.
https://www.dpreview.com/reviews/canon-eos-70d/13
https://www.dpreview.com/reviews/canon-eos-70d/12
Title: Re: AI algorithms for debinning
Post by: Audionut on August 23, 2021, 03:29:32 PM
I've briefly used a friend's A7R, and the most striking thing for me was how easy it is to take an image with an exposure that in no way, shape or form resembles what is shown in the live view or EVF. Surely there must be a setting to adjust that, but in any case, it was interesting being able to change the shutter by 4 stops and have both the live view and EVF show the exact same brightness.

With a regular old bouncy-mirror VF, there's always the exposure meter. And while it too has its quirks, it's never been 4 stops or more wrong! Anyway....

For the same DOF and FOV, f/4.0 on FF is equivalent to f/2.5 on the crop. All else being reasonably equal, I'll take an f/4.0 lens (or the same lens at f/4.0) over an f/2.5 lens, solely because of sharpness.

If DOF isn't an issue, then I'll take the 1 and 1/3rd stop more light hitting the sensor, without even blinking (https://en.wikipedia.org/wiki/Shot_noise), but that's just me.
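Both the f/4.0 ≈ f/2.5 figure and the "1 and 1/3rd stop" of extra light fall out of the same crop-factor arithmetic. A quick sketch, assuming Canon's 1.6x APS-C crop (the 50mm example focal length is arbitrary):

```python
import math

def equivalent_on_crop(ff_focal_mm, ff_f_number, crop=1.6):
    """Focal length and f-number on a crop body that match a
    full-frame lens's field of view and depth of field."""
    return ff_focal_mm / crop, ff_f_number / crop

focal, f_num = equivalent_on_crop(50, 4.0)
# Light-gathering gap between the formats, in stops: area ratio is
# crop**2, and each stop is a factor of 2.
stops = 2 * math.log2(1.6)
print(f"50mm f/4.0 FF ~ {focal:.2f}mm f/{f_num:.2f} on a 1.6x crop")
print(f"sensor-area difference: {stops:.2f} stops")
```

The printed gap comes out just over 1.33 stops, matching the "1 and 1/3rd stop more light" mentioned above.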
Title: Re: AI algorithms for debinning
Post by: mlrocks on August 27, 2021, 06:38:05 PM
I just tested the 650D's crop_new v2 module preset 1736x2214 1x3; with AR 2.67, the resolution is 1736x1954. It is 10-bit, 14-bit lossless, 24 fps, and uses the full super 35/APS-C width. The benchmark for this mode is 60 MB/s with Sandisk Extreme Pro 170 MB/s SD cards of 128 GB and below. For a 256 GB Sandisk Extreme Pro 170 MB/s SD card, the nominal benchmark is 45 MB/s, but it can actually stay continuous as long as the data rate is below 60 MB/s.

For a relatively complex scene with some trees full of green leaves at ISO 100, the data rate is about 65 MB/s. The 650D can record in this mode for 150 frames, i.e. more than 5 seconds, enough for typical uses. In an extreme case, a scene with trees full of green leaves at ISO 1600, the data rate peaks at 70-75 MB/s, and the 650D can record 70-80 frames, about 3 seconds. As long as a take is 3 to 5 seconds, it is still usable as a movie shot.
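These frame counts are consistent with simple buffer arithmetic: when the scene's data rate exceeds what the card sustains, the camera's memory buffer absorbs the difference until it fills. A rough sketch; the ~35 MB usable buffer is an assumption chosen to be consistent with the reported numbers, not a measured 650D figure:

```python
def recordable_frames(scene_mb_s, card_mb_s, buffer_mb, fps=23.976):
    """Approximate frames recordable before the buffer fills.
    Returns None (continuous recording) if the card keeps up."""
    deficit = scene_mb_s - card_mb_s
    if deficit <= 0:
        return None
    return int(buffer_mb / deficit * fps)

print(recordable_frames(55, 60, 35))    # None -> continuous
print(recordable_frames(65, 60, 35))    # 167 frames, near the ~150 observed
print(recordable_frames(72.5, 60, 35))  # 67 frames, near the 70-80 observed
```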

If the benchmark in this mode can be optimized to 65 MB/s as in the UHD 1x3 preset, or even improved to 75-80 MB/s as on the 70D, this mode could be robustly continuous, good for long takes like interviews.

The 650D's crop new v2 preset 1736x2214 1x3 (AR 2.67, 1736x1954, 10-bit depth, 14-bit lossless, 24 fps, full super35/APS-C width) is effectively 5.2k anamorphic 1x3. Its IQ is comparable to that of 4k raw 1x1 cameras, at least those recording at a high compression ratio of 12. I would rate the 650D as a super 35mm 4k raw camera.

I did some field shots with AR 3: 1736x2214 1x3, 1736x1954, 10-bit depth, 14-bit lossless, 24 fps, full super35/APS-C width. It was continuous at ISO 100 for a typical street scene, with a data flow of about 55-60 MB/s. The IQ is even better than the UHD 1x3 preset in full screen mode, and very cinematic. AR 3 is not conventional, but it works very well.

For pp, I used the latest version of MLV App. Although it is not recommended, I applied MLV App's sharpening at 30-50 on the 1x3 footage. The sharpness is good enough, and I have not noticed artifacts in full screen mode; above 70, sharpening artifacts become noticeable. So MLV App is fine for my whole pp before editing in Blender's video editor. There is no need for DaVinci Resolve unless high-end cc is required.

Thanks to theBilalFakhouri for his great work.
Title: Re: AI algorithms for debinning
Post by: mlrocks on August 30, 2021, 09:36:55 AM
I did some ISO tests on the 650D comparing the 1x3 UHD and 1x1 1920x1280 presets. At ISO 200-400, the 1x1 1920x1280 preset showed a noise level comparable to the 1x3 UHD preset at ISO 1600: a difference of about 2-3 stops.

Unless there is a specific need, there is no reason to use the smaller-sensor presets.
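
That 2-3 stop gap is roughly what shot-noise arithmetic predicts when both images are viewed at the same size: the 1x3 UHD preset reads the full sensor width, while the 1x1 1920x1280 preset uses only a 1920-photosite-wide crop of the 650D's 5184-column sensor, so it collects about 7x less light for the same framing. A hedged sketch (sensor width is approximate):

```python
# Shot-noise estimate for the 1x3-vs-1x1 comparison above, at equal
# viewing size. The 650D sensor is 5184x3456 photosites, ~22.3 mm wide
# (approximate figure used for illustration).
import math

full_width_mm = 22.3                        # full APS-C sensor width
crop_width_mm = 22.3 * 1920 / 5184          # 1:1 crop of 1920 columns (~8.3 mm)

# Light captured scales with sensor area, so the advantage in stops is
# twice the log2 of the linear width ratio.
stops = 2 * math.log2(full_width_mm / crop_width_mm)
print(f"~{stops:.1f} stops")                # ~2.9, in the observed 2-3 stop range
```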
Title: Re: AI algorithms for debinning
Post by: mlrocks on September 06, 2021, 03:31:36 AM
I did a landscape stress test on the 650D comparing 1x3 5.2k AR 3, 1x3 4.5k AR 2.67, and 1x3 UHD AR 16:9.
The 5.2k footage is significantly better than UHD and a tad better than 4.5k. 5.2k 1x3 is the preset with the highest IQ on the 650D.
I hope a 5.2k preset can be created on the 650D with a benchmark comparable to UHD's 68 MB/s, to make 1x3 5.2k AR 2.67 continuous.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on September 06, 2021, 07:59:01 AM
Mlrocks,

Did you try 1x3, AR 1.87, 1736x2928, 10-bit on the 650D?  On the EOS-M this preset provides continuous recording at 16 fps and the quality is fantastic!  I can use up to 90% sharpening in MLVApp without noticing any disturbing artefacts.  Perfect for wide-angle landscape videography, even at 16 fps, as long as you don't move the camera too fast.
Title: Re: AI algorithms for debinning
Post by: mlrocks on September 06, 2021, 05:09:44 PM
Mlrocks,

Did you try 1x3, AR 1.87, 1736x2928, 10-bit on the 650D?  On the EOS-M this preset provides continuous recording at 16 fps and the quality is fantastic!  I can use up to 90% sharpening in MLVApp without noticing any disturbing artefacts.  Perfect for wide-angle landscape videography, even at 16 fps, as long as you don't move the camera too fast.

Hello, IDA_ML:

On the 650D, the 1x3 5.2k mode (1736x2216), 10-bit, is from the Crop New module. It has a benchmark of 60 MB/s, not as much as UHD 1x3's 68 MB/s. It also shows many more pink frames than the Crop module presets. I now realize that the higher the 1x3 resolution, the better the preset's IQ. I don't use 1x1 any more unless a lens restriction forces it.

It is good to know that you can use MLV App sharpening at 90% without noticing artefacts on 1x3 footage. I agree with you that MLV App is good enough for 1x3 pp; no need for further software.
Title: Re: AI algorithms for debinning
Post by: IDA_ML on September 06, 2021, 08:04:14 PM
It is good to know that you can use MLV App sharpening at 90% without noticing artefacts on 1x3 footage. I agree with you that MLV App is good enough for 1x3 pp; no need for further software.

I do the postprocessing entirely in MLVApp and export to ProRes 422 LT.  These files are very light to play in Resolve Lite, which I use just for cutting and editing.  In my experience this is the fastest and easiest pp workflow; it works very well on my 7-year-old laptop. I am too lazy to use more complex and time-consuming workflows, and this one yields quite satisfactory image quality too.
Title: Re: AI algorithms for debinning
Post by: mlrocks on September 10, 2021, 06:24:57 PM
Finally, on the 650D, I have settled on UHD 1x3: AR 1.78, 10-bit depth, 14-bit lossless compression, 24 fps. With Bilal's latest 8-22-2021 build, the EDMAC hack on, and SD overclocking to 240 MHz, the benchmark is about 67 MB/s. For a complex scene at ISO 1600, the mode reads less than 60 MB/s, so it is robustly continuous.

This mode has a real-time, correctly framed color preview, and it is compatible with external monitors for director/customer viewing. Using the zoom-in and SET buttons, real-time color magic zoom at 2x is available even while recording. I hope that in the future the SET button can be assigned to auto ETTR, leaving only the zoom-in button for magic zoom. Pushing the INFO button switches to the second user-interface layout, where the shutter speed can be changed and viewed in real time. With the top-panel ISO button, the ISO level can be set, so manual ETTR is possible in this mode. Together with the articulating LCD and an optional LCDVF, this configuration makes the 650D a state-of-the-art news/event camera.

The image quality of UHD 1x3 is significantly better than 1920x1080 1x1, well good enough for YouTube, TV, and smartphones. Even on a desktop monitor in full screen mode, UHD 1x3 footage still holds up.
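
The ~67 MB/s benchmark can be sanity-checked from the frame geometry. A sketch assuming the UHD 1x3 preset captures 1280x2160 photosites per frame (3840x2160 after 3x horizontal desqueeze) and an assumed overall ~0.55 lossless compression ratio relative to the 14-bit container; both figures are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope data rate for the UHD 1x3 preset described above.
width, height, fps = 1280, 2160, 24      # assumed captured frame geometry
bits_per_sample = 14                     # raw container depth

# Uncompressed rate in MB/s (decimal megabytes, as card specs use).
raw_mb_s = width * height * bits_per_sample * fps / 8 / 1e6   # ~116 MB/s

lossless_ratio = 0.55                    # assumed overall compression ratio
est_mb_s = raw_mb_s * lossless_ratio     # ~64 MB/s, near the quoted 67 MB/s

print(f"uncompressed: {raw_mb_s:.0f} MB/s, est. lossless: {est_mb_s:.0f} MB/s")
```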

With a well-made, cheap SmallRig baseplate and some carbon fiber rails, I can mount my Tascam DR-60D audio recorder, providing two standard XLRs and one 3.5mm input, so professional-level audio can be integrated into this setup. The rail system has a small footprint and is actually lightweight.

If continuous recording is not a requirement, 5.2K 1x3 and 3K 1x1 (10-bit depth, 14-bit lossless compression, 24 fps, AR 2.67) can record for at least 3 to 5 seconds, even in extreme cases like a complex scene at ISO 1600. 5.2K 1x3 matches the IQ of 4K raw 1x1 at a compression ratio (CR) of 12.
Title: Re: AI algorithms for debinning
Post by: mlrocks on September 11, 2021, 02:35:30 AM
On the 5D3, with the latest ML, external monitors do not give correct framing in modes above 1080p. The correctly framed preview has low resolution, making it difficult to focus. As Stef7 mentioned, this is the major caveat of 5D3 ML at the current stage. Other than this preview issue, 5D3 ML is really user friendly, with auto ETTR and 10x zoom implemented in the 5.8k 1x3 mode.

Frequently changing between modes freezes the camera on the 5D3. To work around this, I sometimes take out the battery, and sometimes copy fresh ML files from the computer to the CF card. So on the 5D3 it is better to stick to one mode and not switch frequently. Update: unloading the adtg_gui and bench modules solves this issue; changing modes is now smooth.

With card spanning and no SD overclocking, I use anamorphic 5.8k 1x3, AR 2.67, 24 fps, 10-bit depth, 14-bit lossless. It is continuous for a relatively complicated scene with trees and leaves at ISO 800 f/5.6 or ISO 6400 f/16. The reading is about 125 MB/s for the first several seconds, showing an orange dot, then settles to green at about 85 MB/s. For super 35mm lenses, I use 3.5k 1x1 centered, AR 2.67, 24 fps, 10-bit depth, 14-bit lossless. It is continuous for a relatively complicated scene at ISO 800 f/5.6 or ISO 6400 f/16. The reading is about 135 MB/s for the first several seconds, showing a red dot, then goes green or orange at about 95 MB/s. The noise at ISO 6400 in the 3.5k 1x1 mode is much greater than in the 5.8k 1x3 mode, probably a 2-stop difference, and not usable to my eyes. ISO 6400 in the 5.8k 1x3 mode is acceptable; noise is noticeable.

IQ-wise, 5.8k 1x3 is a little better than 3.5k 1x1 in terms of detail, though not significantly. Both are already very good in full screen mode on a 1920x1080 monitor, at least at ISO 100. My experiment showed that 5.8k 1x3 footage is on par with UHD 1x1 in terms of detail, if not better. In overall IQ, 5.8k 1x3 is better than UHD 1x1, due to its larger sensor area.

With card spanning and SD overclocking to 160 MHz, I use anamorphic 5.8k 1x3, AR 2.4, 24 fps, 14-bit depth, 14-bit lossless; it is continuous. I guess I could also do continuous anamorphic 5.8k 1x3 at AR 1.78, 24 fps, 10-bit depth, 14-bit lossless. For super 35mm lenses, I use 3.5k 1x1 centered, AR 2.4, 24 fps, 10-bit depth, 14-bit lossless; it is continuous. It seems the improvement from SD overclocking is not as significant on the 5D3 as on the 650D at the current stage. I can live with AR 2.67.

Title: Re: AI algorithms for debinning
Post by: mlrocks on September 13, 2021, 03:47:18 AM
I did a stress test on the 5D3 with SD overclocking at 160 MHz. The CF and SD cards were both Sandisk 170 MB/s 128 GB. I used anamorphic 5.8k 1x3, AR 2.67, 24 fps, 10-bit depth, 14-bit lossless. The recording lasted almost 40 minutes. The temperature indicator read 26 C the whole time during recording. After the card was full and recording stopped, the indicator read 53 C; the ML menu showed an internal temperature of 49 C, which dropped to 35 C within 15 minutes of turning off the camera.

The CF card used 120 GB and the SD card used 60 GB. I did another field shoot the same day in the same mode without SD overclocking; the CF card used 120 GB and the SD card 30 GB. I suppose that without SD overclocking the CF write speed is 80 MB/s and the SD 20 MB/s; with SD overclocking, CF is 80 MB/s and SD 40 MB/s. The remaining-space indicator in the ML user interface actually reflects the SD card, not the CF card.
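
Since both cards in a spanned recording write for the same duration, the per-card write split can be estimated from the space consumed on each. A small helper; the 80 MB/s CF figure is the estimate from the paragraph above, not a measured value:

```python
# Estimate the SD write speed implied by the CF/SD usage ratio in a
# spanned recording, assuming a known (here: assumed ~80 MB/s) CF speed.
def sd_speed_estimate(cf_gb_used, sd_gb_used, cf_mb_s=80.0):
    """Data written scales with write speed over equal recording time."""
    return cf_mb_s * sd_gb_used / cf_gb_used

print(sd_speed_estimate(120, 60))   # with SD overclock: 40.0 MB/s
print(sd_speed_estimate(120, 30))   # without overclock: 20.0 MB/s
```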

Title: Re: AI algorithms for debinning
Post by: Chris19 on September 13, 2021, 09:03:40 AM
Dear ML Rocks

5.760 x 3.840 pixels (3:2), 3.960 x 2.640 pixels (3:2), 3.840 x 2.560 pixels (3:2), 2.880 x 1.920 pixels (3:2), 2.880 x 1.920 pixels (3:2), 1.920 x 1.280 pixels (3:2), 720 x 480 pixels (3:2)

I thought the correct definition is 5.7k?
Do you also use Super 35 lenses? Somebody told me those lenses look too soft at infinity?

Title: Re: AI algorithms for debinning
Post by: mlrocks on October 08, 2021, 07:04:27 AM
Dear ML Rocks

5.760 x 3.840 pixels (3:2), 3.960 x 2.640 pixels (3:2), 3.840 x 2.560 pixels (3:2), 2.880 x 1.920 pixels (3:2), 2.880 x 1.920 pixels (3:2), 1.920 x 1.280 pixels (3:2), 720 x 480 pixels (3:2)

I thought the correct definition is 5.7k?
Do you also use Super 35 lenses? Somebody told me those lenses look too soft at infinity?

It can be called 5.7k or 5.8k; it is actually 5.75k, so no big deal. Likewise, the 70D can be called 5.4k or 5.5k.
I use EF-S lenses. No problem at infinity.
It really depends on the lens. Old MF wide-angle lenses may have issues at infinity, but the Zeiss C/Y 21mm is pretty darn good at infinity, and the Leica R 180/3.4 is a spy lens. So it really depends.
There is hearsay that the Zeiss Super 16 and Super 35 standard prime lenses are adapted from C/Y lenses, with cinema lens mechanics added. There is no reason that Super 35 cinema lenses should be optically different from photo MF lenses; the only major difference is the flange distance. Using a Leitax adapter fitted to each lens can be really accurate, while an inaccurate adapter may cause issues at infinity. Some MF cinema lenses, like B4 lenses, need back focusing, which is an art in itself; you need to be at a pro level, or make a serious effort, to handle them.
Title: Re: AI algorithms for debinning
Post by: mlrocks on October 08, 2021, 07:40:06 AM
My ideal ML camera setup is the following:
5D3, 5.8k 1x3, for cinematic scenarios; thanks to Danne's ML, almost perfect for field work.
650D/700D, UHD 1x3, for news/events; thanks to Bilal's ML, especially the correct color preview and magic zoom v2.
70D, 5.5k 1x3, Dual Pixel AF with STM EF-S lenses, on a steadicam/gimbal for cinematic moving scenes.

I already have the 5D3 and 650D, and will get a 70D when its ML is more mature.
Title: Re: AI algorithms for debinning
Post by: mlrocks on October 12, 2021, 05:21:36 AM
For a wide-angle landscape/seascape scene, I used MLV App to process the 5.7/5.8k 1x3 raw files and export them as 2k and 5.7k high-quality H.264 MP4 files. Viewing them in full screen mode on a 27-inch 1920x1080 monitor, to my eyes there is some loss of detail in the 2k files compared to the full-resolution files, but the difference is not eye-catching. The 2k MP4 files are about 10 times smaller than the 5.7k ones, so exporting these oversampled high-quality 2k files can be a good way to save hard drive space if 4k is not a requirement. Also, the MLV App processing time for the 2k output is about 40% of that for the 5.7k output.