AI algorithms for debinning

Started by mlrocks, July 17, 2021, 02:58:18 AM


mlrocks

I just did a test comparing three different modes on the 650D: crop 1920x1080 24 fps, anamorphic UHD 24 fps, and crop 3K 16 fps. The full-resolution mode is not working; I tried it but could not test it. To my big surprise, the anamorphic UHD footage on my 27-inch computer monitor in full-screen mode is much sharper than the crop 1080p footage, not merely at the same level. The native resolution of the anamorphic UHD is 1280x1800, just about 20-30% more than that of the crop 1080p, yet it looks way more delicious.  The anamorphic UHD looks very similar to the crop 3K 16 fps, almost at the same level.

This really surprises me, because I used to think that the anamorphic modes were just simple uprezzing and that the native resolution was the real thing. It seems that the sensor records all of the information, and Canon then has a binning algorithm that combines 3 horizontal pixels into 1 fat pixel. If MLV App had a transformation/debinning algorithm to reverse engineer the Canon binning process, or an AI algorithm to correctly guess the original 3 pixels behind each fat pixel, then the anamorphic UHD would be very close to native UHD. Seen this way, binning is more like a compression process; with the proper codec, it actually helps fit within the maximum write speed of the camera. After being written to the card, the information could be read out again by computer software. If this is true, then binning modes are better than the crop pixel-by-pixel modes, because more information is recorded by the sensor and saved in the raw files.
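To make the idea concrete, here is a minimal numpy sketch of how I imagine the binning and a naive reversal (just my mental model of 3-to-1 averaging, not Canon's actual readout, which happens in analog on same-coloured Bayer pixels):

import numpy as np

def bin_1x3(plane):
    # Average each group of 3 horizontal pixels into 1 "fat" pixel.
    h, w = plane.shape
    w3 = w - (w % 3)                   # drop columns that don't fill a triple
    return plane[:, :w3].reshape(h, w3 // 3, 3).mean(axis=2)

def naive_desqueeze(binned):
    # The reverse step: repeat each fat pixel 3 times; nothing is recovered.
    return np.repeat(binned, 3, axis=1)

plane = np.random.rand(1800, 3840)     # one colour plane, full readout width
binned = bin_1x3(plane)                # 1800x1280: what gets recorded
restored = naive_desqueeze(binned)     # 1800x3840 again, but blurred
print(np.allclose(plane, restored))    # False: the averaging is lossy

If the averaging really is a plain mean, the original triple cannot be recovered exactly; that is what a smarter or AI de-squeeze would have to guess.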


Jonneh

In the anamorphic mode you mention, there's horizontal binning, and the desqueezing algorithm (and it does matter which one you pick) then creates the missing pixels. If it looks better to you than the crop 1080p, that could be the reduced crop factor (unless you are compensating with a different focal length, but then other confounds may come into play, such as distortion or sharpness, probably both in favour of the anamorphic mode), or it could be because the final image has a higher vertical resolution. I read somewhere that the eye is more tolerant of reduced horizontal resolution, though I'm not sure if that's actually true.
5D3 / 100D

theBilalFakhouri

Quote from: Jonneh on July 18, 2021, 12:08:02 AM
There's no binning in the anamorphic mode you mention, just horizontal line skipping.

Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically. Others like the 650D skip vertical lines of pixels instead of binning, resulting in more moiré/aliasing in 1080p. By using *Anamorphic* 1x3 modes, the camera still bins pixels horizontally, but instead of skipping vertical lines we read all of them, resulting in less moiré/aliasing and more resolution and detail.
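Schematically, the three readout styles look something like this (a numpy sketch on a single colour plane; real Bayer readout bins same-coloured pixels in the analog domain, so this only shows where the rows and columns go):

import numpy as np

sensor = np.random.rand(3240, 3840)    # hypothetical full-res colour plane
h, w = sensor.shape

# 5D3-style 3x3: average whole 3x3 neighbourhoods (bins both directions)
binned_3x3 = sensor.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

# 650D-style 1080p: bin horizontally, then *skip* 2 of every 3 rows
binned_h = sensor.reshape(h, w // 3, 3).mean(axis=2)
skipped = binned_h[::3, :]             # the dropped rows cause moire/aliasing

# 1x3 anamorphic: bin horizontally but keep *every* row
anamorphic = binned_h                  # tall squeezed frame, no vertical loss

print(binned_3x3.shape, skipped.shape, anamorphic.shape)
# (1080, 1280) (1080, 1280) (3240, 1280)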



@mlrocks

This topic has been discussed before; de-binning is impossible due to the information lost while averaging (binning) the pixels.

There was a suggestion by a1ex some years ago to enhance the de-squeezing algorithm for the 1x3 (anamorphic) modes. Until now, no one has tried to work on it, and in any case we will never get 1:1 detail from 1x3 footage.

If the Topaz team added an option to Video Enhance AI to de-squeeze real anamorphic footage (upscaling only the horizontal resolution), maybe that would give better-looking 1x3 footage (just my guess; has anyone tried to contact the Topaz team? :P)
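For reference, even a non-AI de-squeeze can do better than repeating each fat pixel 3 times. Something in this spirit (a toy numpy sketch of horizontal-only linear interpolation; this is not a1ex's actual proposal, just an illustration of the direction):

import numpy as np

def desqueeze_linear(binned):
    # Stretch the width by 3, interpolating along x only; the vertical
    # resolution is untouched because no rows were skipped.
    h, w = binned.shape
    x_old = np.arange(w)
    x_new = np.linspace(0, w - 1, w * 3)
    return np.vstack([np.interp(x_new, x_old, row) for row in binned])

binned = np.random.rand(1800, 1280)    # squeezed 1x3 frame (one plane)
print(desqueeze_linear(binned).shape)  # (1800, 3840)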

Jonneh

Quote from: theBilalFakhouri on July 18, 2021, 12:38:40 AM
Quick correction: all cameras does bin pixels horizontally, but only 5D3 can bin pixels also vertically,

I stand corrected, thanks! (Original post edited to avoid spreading misinformation.) Although I knew about the horizontal binning, and that only the 5D3 had 3x3 binning, I thought this anamorphic mode used line skipping. Good to know how it actually works.

Quick thought, and I'm sure I'm showing my ignorance here, but does one have the option of implementing (horizontal) line skipping instead of binning? If so, I wonder what the effect on aliasing would be if one alternated (cycled) the skipped lines from frame to frame, such that you skip lines 2 and 3 in frame 1, lines 3 and 1 in frame 2, and lines 1 and 2 in frame 3 before repeating the loop, as in the sketch below. I imagine the illusion of temporal averaging would remove jaggies. I'm not so sure about moiré, but that might be improved too. One might see a shimmering effect at approx. 8 fps, but that might just look like inoffensive luminance noise, as long as the line alternation is done in both dimensions (and even that might not be necessary). Lots of speculation here, though.
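Something like this, just to make the cycling concrete (a toy numpy sketch of the sampling pattern only; I have no idea whether the sensor readout could actually be programmed this way):

import numpy as np

def cycled_skip(frames):
    # Keep a different 1-of-3 set of lines on each frame, so that over any
    # 3 consecutive frames every line gets sampled exactly once.
    for i, frame in enumerate(frames):
        yield frame[i % 3::3, :]   # frame 0 keeps lines 0,3,6,...; frame 1
                                   # keeps lines 1,4,7,...; and so on

frames = (np.random.rand(1080, 1920) for _ in range(9))
for kept in cycled_skip(frames):
    print(kept.shape)              # (360, 1920) per frame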

This may be moot if one is stuck with the native behaviour, but can't one sample the whole sensor and then discard unwanted lines before committing to memory? Maybe not. As you can tell, I don't have the first idea of how it works.  ;)
5D3 / 100D

mlrocks

Quote from: theBilalFakhouri on July 18, 2021, 12:38:40 AM
Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically. Others like the 650D skip vertical lines of pixels instead of binning, resulting in more moiré/aliasing in 1080p. By using *Anamorphic* 1x3 modes, the camera still bins pixels horizontally, but instead of skipping vertical lines we read all of them, resulting in less moiré/aliasing and more resolution and detail.



@mlrocks

This topic has been discussed before; de-binning is impossible due to the information lost while averaging (binning) the pixels.

There was a suggestion by a1ex some years ago to enhance the de-squeezing algorithm for the 1x3 (anamorphic) modes. Until now, no one has tried to work on it, and in any case we will never get 1:1 detail from 1x3 footage.

If the Topaz team added an option to Video Enhance AI to de-squeeze real anamorphic footage (upscaling only the horizontal resolution), maybe that would give better-looking 1x3 footage (just my guess; has anyone tried to contact the Topaz team? :P)

Thanks for the info, theBilalFakhouri. I hope Topaz Video Enhance AI can do something about this. What was a1ex's suggestion at the time?
If Canon raw kept metadata for each pixel in its raw files, that might help with restoring the binned pixels.
Even though 5.7k anamorphic raw is not as good as 5.7k pixel-by-pixel raw, as long as the debinning is intelligent, the 5.7k anamorphic raw may come close in terms of image quality.

mlrocks

Quote from: Jonneh on July 18, 2021, 12:08:02 AM
In the anamorphic mode you mention, there's horizontal binning, and the desqueezing algorithm (and it does matter which one you pick) then creates the missing pixels. If it looks better to you than the crop 1080p, that could be the reduced crop factor (unless you are compensating with a different focal length, but then other confounds may come into play, such as distortion or sharpness, probably both in favour of the anamorphic mode), or it could be because the final image has a higher vertical resolution. I read somewhere that the eye is more tolerant of reduced horizontal resolution, though I'm not sure if that's actually true.

Hello, Jonneh:

I used the same lens at the same distance with different modes on the Canon 650D. The scene has some trees (about 50% of the frame), a compartment gate, and a parking lot with cars. I wanted to see how the different modes handle scenes with a lot of detail. I also tested close-up shots in the different modes, but there the resolution difference is not significant.
Cropped 1920x1080 24p has a closer look than the anamorphic UHD, i.e., the anamorphic UHD has a wider view than the cropped 1080p, so the anamorphic UHD is under more "stress".
I was really surprised that the anamorphic UHD looks much sharper than the cropped 1080p. I understand that the anamorphic UHD uses a larger sensor area and therefore maybe has better aesthetics, but I actually find the resolution of the anamorphic UHD noticeably better than the cropped 1080p. The resolution of the anamorphic UHD looks close to the cropped 3K mode, yet its native resolution is just 30% more than the cropped 1080p. I really don't know why; I just share here an observation that goes against my previous assumption that the anamorphic modes do not help resolution.


Jonneh

Quote from: mlrocks on July 18, 2021, 05:04:39 AM
Hello, Jonneh:
I was really surprised that the anamorphic UHD looks much sharper than the cropped 1080p. I understand that the anamorphic UHD uses a larger sensor area and therefore maybe has better aesthetics, but I actually find the resolution of the anamorphic UHD noticeably better than the cropped 1080p. The resolution of the anamorphic UHD looks close to the cropped 3K mode, yet its native resolution is just 30% more than the cropped 1080p. I really don't know why; I just share here an observation that goes against my previous assumption that the anamorphic modes do not help resolution.

You say that the anamorphic mode is "under more stress", which I take to mean that a given object will be smaller in the field of view, but are you comparing objects side by side, such that you can judge the resolving power? Otherwise, what you're noticing is likely a general impression, and as you say in your reply to Bilal, part of this is probably the upscaling algorithm doing its thing. Since the interpolated pixels are created somewhat intelligently, the illusion is maintained that information is gained, although none is in the strict sense. I doubt any metadata is stored relating to the pre-binned pixels, but I could be wrong. After all, there's not much to represent other than the intensity of the RGB pixels, so at best you could sample the discarded pixels at a lower bit depth, but then the file would be significantly larger. What's the resolution of the 1080p mode on the 650D, out of interest?

Since you have some trees in your scene, are they against a sky? If so, have you compared that area at 100% zoom between the two modes? I know the standard viewing experience isn't one of a pixel peeper, but I consistently get artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes (which I tended to use for landscapes, often with trees in the scene). I now use, on the 5D3, one of the higher-res crop modes with a wider lens. Are you using MLV App?
5D3 / 100D

IDA_ML

Mlrocks,

I think the confusion comes from the fact that you are comparing a 6.9 MP anamorphic image, [(1280x3)x1800], with a 2.07 MP cropped image (1920x1080).  The difference in size is not 30% but 3.3 times (330%), regardless of how the anamorphic image was obtained.  Yes, you record 1280 pixels horizontally in that case to keep the bandwidth low, but these get stretched by a factor of 3 in post to 3840 pixels, which is your final horizontal image size.  Also, your vertical resolution (1800 px) is much higher than the 1080 px in the cropped case.  It is quite clear that a 6.9 MP image will look much more detailed than a 2.07 MP one that has been cropped, i.e. cut out of only a fraction of the sensor area and then blown up to the full 27" screen size.
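In numbers (a quick check in Python):

anamorphic_mp = 1280 * 3 * 1800 / 1e6   # 6.91 MP after the 3x stretch
cropped_mp = 1920 * 1080 / 1e6          # 2.07 MP
print(anamorphic_mp / cropped_mp)       # ~3.3x, not 30%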


Jonneh

Quote from: IDA_ML on July 18, 2021, 01:35:24 PM
Mlrocks,

I think the confusion comes from the fact that you are comparing a 6.9 MP anamorphic image, [(1280x3)x1800], with a 2.07 MP cropped image (1920x1080).

Given the wonders of scaling algorithms, this is the key point (if they were fully dumb, simply triplicating binned pixels, we'd have to resort to the increased vertical resolution more than offsetting the decreased horizontal resolution, or some such, to explain the anamorphic mode looking better). A fairer test would be a 1920x1080 image upscaled to 3840x2160, or something similar, and comparing the output. That way, the advantage of the increased input vertical resolution (before upscaling) in the anamorphic mode is divorced from output resolution.
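For instance, something like this (a sketch using Pillow; the file names are hypothetical):

from PIL import Image

crop = Image.open("crop_1080p_frame.png")        # 1920x1080 frame grab
uhd = crop.resize((3840, 2160), Image.LANCZOS)   # match the anamorphic output size
uhd.save("crop_upscaled_uhd.png")                # now compare like for like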
5D3 / 100D

Levas

Hard to compare crop mode against anamorphic mode; it's hard to get the same field of view in both modes  :P

But for me the biggest difference is that in crop mode, the footage can be tweaked in post like a normal raw photo image.
No moiré, no aliasing, just pristine sharp pictures  8)
In anamorphic mode, you have to be careful when tweaking your footage in post; too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:
Quote from: Jonneh on July 18, 2021, 09:31:28 AM
but I consistently get artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes

Lately I've been experimenting a little with anamorphic mode; you can get it to look good, with no jagged edges and such, but it won't be as sharp as crop mode.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee, and export as PNG files.
Then, with my PNG image sequence, I use FFmpeg to make a movie file with the correct aspect ratio out of it.


ffmpeg -i M27-1724_frame_%06d.png -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4

The above works for an image sequence named M27-1724_frame_ followed by 6 digits, with the png extension.
You could also stretch a movie file instead, of course one that isn't stretched yet:

ffmpeg -i Input.mp4 -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4


The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstrecthing.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

mlrocks

Quote from: IDA_ML on July 18, 2021, 01:35:24 PM
Mlrocks,

I think the confusion comes from the fact that you are comparing a 6.9 MP anamorphic image, [(1280x3)x1800], with a 2.07 MP cropped image (1920x1080).  The difference in size is not 30% but 3.3 times (330%), regardless of how the anamorphic image was obtained.  Yes, you record 1280 pixels horizontally in that case to keep the bandwidth low, but these get stretched by a factor of 3 in post to 3840 pixels, which is your final horizontal image size.  Also, your vertical resolution (1800 px) is much higher than the 1080 px in the cropped case.  It is quite clear that a 6.9 MP image will look much more detailed than a 2.07 MP one that has been cropped, i.e. cut out of only a fraction of the sensor area and then blown up to the full 27" screen size.


Hello, IDA_ML:

I agree with your analysis. I think the information is there in the anamorphic modes; it just takes software skills to bring it out. For pixel-by-pixel modes, the information is not there. I will use anamorphic modes more in the future.

mlrocks

Quote from: Levas on July 18, 2021, 03:26:03 PM
Hard to compare crop mode against anamorphic mode; it's hard to get the same field of view in both modes  :P

But for me the biggest difference is that in crop mode, the footage can be tweaked in post like a normal raw photo image.
No moiré, no aliasing, just pristine sharp pictures  8)
In anamorphic mode, you have to be careful when tweaking your footage in post; too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:
Lately I've been experimenting a little with anamorphic mode; you can get it to look good, with no jagged edges and such, but it won't be as sharp as crop mode.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee, and export as PNG files.
Then, with my PNG image sequence, I use FFmpeg to make a movie file with the correct aspect ratio out of it.


ffmpeg -i M27-1724_frame_%06d.png -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4

The above works for an image sequence named M27-1724_frame_ followed by 6 digits, with the png extension.
You could also stretch a movie file instead, of course one that isn't stretched yet:

ffmpeg -i Input.mp4 -filter:v "scale=3840:2160:sws_flags=gauss"  -c:v libx264 -r 25 -pix_fmt yuv420p -crf 17 Output.mp4


The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstretching.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

Hello, Levas:

Have you tried the RawTherapee + Topaz AI workflow? Do you think it would work better?

Levas

Didn't test the workflow with Topaz (I don't have it).

But I expect it not to give better results (if you're aiming for fewer artifacts).
The Canon horizontal pixel binning used in the anamorphic modes is pretty much unique in the cinema/photo world.
The only ones using it are a bunch of ML enthusiasts with old Canon cameras  :P

Topaz AI is made with normal images in mind: full-pixel-readout images, where no tricks like pixel binning are used.
So it's not trained/made for images that are horizontally pixel-binned.
Therefore I don't expect it to give better results on anamorphic ML footage.


IDA_ML

Quote from: Jonneh on July 18, 2021, 09:31:28 AM
... but I consistently get artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes (which I tended to use for landscapes, often with trees in the scene). I now use, on the 5D3, one of the higher-res crop modes with a wider lens. Are you using MLV App?

Frankly, I am quite surprised to hear that you are having all these problems with the anamorphic modes.  I film wide-angle landscape videography on the EOS-M using the anamorphic 4k (1280x2160)/24fps/10-bit lossless mode all the time, and the results are fantastic! I expose the scene to the right while making sure I do not blow out the highlights, and I never get the artefacts that you are talking about.  In high-contrast scenes, I typically increase the exposure until zebras start to appear and then dial it down by 0.5 to 1 stops.  A high-quality lens, VND filters and precise focusing are a must!  Are you sure you are not getting chromatic aberrations? My landscape lenses are the EF-S 11-18, EF 24/2.8 IS and the EF 35/F2 IS.

As far as postprocessing is concerned, I use MLVApp, and it does a hell of a job processing anamorphic MLV files, especially the latest v1.13.  Please try it if you haven't done so yet!  The default settings are great, and if you don't use extreme adjustments you will get very pleasing results.  Typically, I export to ProRes and do the video editing in Resolve, where I add some sharpness to my taste if necessary.  This is fully enough to compensate for the slight anamorphic softness that Levas mentioned.  That's all.

All in all, the 5k anamorphic and 5.7k anamorphic are my filming modes on the EOS-M and the 5D3, respectively.  These modes are a little tricky to use, but once you learn how to squeeze the maximum image quality out of them, you will never go back to the other modes.

Levas

The quality difference between crop and anamorphic is in the pixel-peeper range.

But differences aside, even the anamorphic modes look a hell of a lot better than 4k/UHD clips from my phone   8)

Levas

After using Magic Lantern raw video for years, you get used to its quality.
Sometimes I shoot a 4k/UHD clip with my phone, just to see a different quality level of footage  :P

mlrocks

Quote from: Levas on July 18, 2021, 07:02:32 PM
The quality difference between crop and anamorphic is in the pixel-peeper range.

But differences aside, even the anamorphic modes look a hell of a lot better than 4k/UHD clips from my phone   8)

That was what I guessed. When I view the whole scene on my computer monitor at a viewing distance of about 1 foot, I don't see those zigzags. In the latest version of MLV App, if a low-contrast lens is used and the sky is overcast, I use 81 contrast, 81 clarity (micro-contrast), and 81 chroma-separation sharpening. I don't see those zigzags when using anamorphic modes. The results are stunning, 3D-like. It is like seeing the outside through an open window with a frame but no glass.

mlrocks

I am starting to believe that the anamorphic 5.7k raw on the 5D3 is a true 5.7k raw with a compression ratio of 6 or above, as the horizontal binning has a compression ratio of 3 or above, and 14-bit lossless LJ92 has a compression ratio of 1.5-2.5, depending on scene complexity and ISO level. The image quality may be "mediocre" compared to native uncompressed 6k raw, but it should be much better than 1920x2340 if the AR is 2.40.
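The back-of-envelope arithmetic, for what it's worth (my own framing of "compression ratio", not an official spec):

binning_ratio = 3.0               # 1x3 binning: 3 sensor pixels -> 1 recorded
lj92_low, lj92_high = 1.5, 2.5    # typical lossless range (scene/ISO dependent)
print(binning_ratio * lj92_low,
      binning_ratio * lj92_high)  # 4.5 to 7.5, i.e. around 6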

Currently, the Red Komodo offers 6K raw with compression ratio choices from 3 to 12, the same as the BMPCC 6K Pro. If implemented the same as on the Canon R5, the Canon 1DX3 and C500MK2 have 6k raw with a compression ratio of 6.

It would be very interesting if someone with access to these cameras ran a test across the following:

1. 5D3, Vista Vision, ML Raw, 6k anamorphic, compression ratio of 6 or above;
2. 70D, Super 35mm, ML Raw, 5k anamorphic, compression ratio of 6 or above;
3. EOS-M, Super 35mm, ML Raw, 5k anamorphic, compression ratio of 6 or above;
4. Red Komodo, Super 35mm, Red Raw, 6k pixel by pixel, choosing a compression ratio of 6 or above;
5. BMPCC 6K Pro, Super 35mm, BRaw, 6k pixel by pixel, choosing a compression ratio of 6 or above;
6. Canon C500MK2, Vista Vision, Canon Raw, 6k pixel by pixel, compression ratio of 6;
7. Canon C200, Super 35mm, Canon Raw, 4k pixel by pixel, compression ratio of 6;
8. Canon 1DX3, Vista Vision, Canon Raw, 6k pixel by pixel, compression ratio of 6;
9. Canon R5, Vista Vision, Canon Raw, 8k pixel by pixel, compression ratio of 6;
10. Arri Alexa Mini LF, IMAX, Arri Raw, 4k pixel by pixel, uncompressed (no option for compression?);
11. Arri Alexa LF, IMAX, Arri Raw, 6k pixel by pixel, uncompressed (no option for compression?);
12. Sony Venice, Vista Vision, Sony Raw, 6k pixel by pixel, compression ratio of 6 for Raw Lite XOCN;

The results could be mind-blowing.

My bet is that all of these cameras will deliver similar image quality, if one is not pixel peeping, as long as the operator is an expert on the camera and on the raw process and can release the full potential of the system.

mlrocks

Quote from: Levas on July 18, 2021, 03:59:19 PM
Didn't test the workflow with Topaz (I don't have it).

But I expect it not to give better results (if you're aiming for fewer artifacts).
The Canon horizontal pixel binning used in the anamorphic modes is pretty much unique in the cinema/photo world.
The only ones using it are a bunch of ML enthusiasts with old Canon cameras  :P

Topaz AI is made with normal images in mind: full-pixel-readout images, where no tricks like pixel binning are used.
So it's not trained/made for images that are horizontally pixel-binned.
Therefore I don't expect it to give better results on anamorphic ML footage.
This makes sense. Thanks for the explanation.

Jonneh

Quote from: IDA_ML on July 18, 2021, 04:52:58 PM
 
Frankly, I am quite surprised to hear that you are having all these problems with the anamorphic modes.  I film wide-angle landscape videography on the EOS-M using the anamorphic 4k (1280x2160)/24fps/10-bit lossless mode all the time, and the results are fantastic! I expose the scene to the right while making sure I do not blow out the highlights, and I never get the artefacts that you are talking about.  In high-contrast scenes, I typically increase the exposure until zebras start to appear and then dial it down by 0.5 to 1 stops.  A high-quality lens, VND filters and precise focusing are a must!  Are you sure you are not getting chromatic aberrations? My landscape lenses are the EF-S 11-18, EF 24/2.8 IS and the EF 35/F2 IS.

As far as postprocessing is concerned, I use MLVApp, and it does a hell of a job processing anamorphic MLV files, especially the latest v1.13.  Please try it if you haven't done so yet!  The default settings are great, and if you don't use extreme adjustments you will get very pleasing results.  Typically, I export to ProRes and do the video editing in Resolve, where I add some sharpness to my taste if necessary.  This is fully enough to compensate for the slight anamorphic softness that Levas mentioned.  That's all.

All in all, the 5k anamorphic and 5.7k anamorphic are my filming modes on the EOS-M and the 5D3, respectively.  These modes are a little tricky to use, but once you learn how to squeeze the maximum image quality out of them, you will never go back to the other modes.

Yeah, I did persevere for a while, but the same issues kept reappearing. Without pixel peeping, it's mostly trees against the sky that look bad---although there it's very obvious. Zooming in reveals other issues, but those would be OK if it weren't for the first set of problems. Trees matter! ;)

Most of my lenses are vintage, but my wide angle is my sole posh modern one, the 16-35 F4L IS: sharp as you like, and practically free of CAs. The artefacts are often like Levas's image, posted to show the same thing, but I know I've seen some funkier cyan and yellow mixed in:

https://live.staticflickr.com/65535/50140581561_3e308f2153_o.png

No blown-out highlights---I also use zebras and then dial things back---and I focus precisely on my subject. If being slightly out of the plane of focus means that the trees will inevitably show artefacts, I'm likely to resort to crop mode for most shots, unless I absolutely need a wider field of view than the 16mm end of my lens affords. No NDs---for the dusky scenes I tend to film, I don't find I need them (while staying at or below the approximate diffraction point of my lens). I also use MLV App for postprocessing (default settings), although it's true I haven't tried 1.13. It's a shame, because I love the stability of the mode and being able to use the full 16mm of the above lens instead of having a 24mm equivalent with added perspective distortion.

Still, these issues cropped up while filming on a trip, so I didn't have time to do a thorough diagnosis. I'll find the time to sit down and do one, and I'll post some DNGs in the meantime, once I'm reunited with my material.
5D3 / 100D

IDA_ML

To back up my claim that there is barely any image quality difference between 5.7k 1x3 anamorphic footage on the 5D3 and an actual CR2 still image from the full-frame sensor of that camera, I have just performed the following experiment:

1) I filmed a short clip at 1392x2340 resolution, 24 fps, 14-bit lossless, in the 5.7k anamorphic mode.  At this resolution, the crop factor is 1.38, so the final frame size is 4176x2340 pixels;
2) Then I shot a 27 MB CR2 still image of the same scene using the same lens;
3) Both the clip and the still image were opened in MLVApp and processed to my liking, and then a frame grab from the clip was exported as a JPEG.  The same was done with the CR2 image;
4) The CR2 JPEG was opened in Photoshop, slightly cropped to achieve nearly the same view as the frame grab from the clip, and then saved without further processing for comparison;
5) Both JPEGs were then compared in FastStone Image Viewer.

All results, including the original RAW files, are ready for download here:

https://we.tl/t-D56tL8qor2

Comparisons at 100% magnification are also included for the pixel peepers.  The above link will be active for 7 days.

As you can see, there is barely a difference in image quality between the two scenarios.  In fact, watching them on my 30" (2560x1440) monitor in full screen from about 1 m away, I could not tell which was which.  Even at 100% magnification, the differences are barely perceptible.  I do not see any disturbing artefacts or aliasing either.  Continuous recording, a low crop factor and stable camera operation make me feel that I can continue using the 1x3 anamorphic modes and MLVApp with confidence.

Happy pixel peeping!

Jonneh

Nice comparison, IDA_ML---thanks for that.

Indeed, viewing as you describe, I'd be hard-pushed to pass an A/B test too. At 100%, a few artefacts certainly appear: banding either side of vertical objects (lamp posts), zigzags in lines that would otherwise be clean (tramlines), some jaggies (the cable on the roof in the foreground), horizontal stretching in fence poles, a general lack of definition in the traffic signs, and rather muted reds (which could be down to the editing). But now we're in full-on pixel-peeping mode, belong in another forum, and will probably never film anything beyond a brick wall and a studio test scene.

The problems I describe are visible in the first scenario, but it's time I produced some images to back this claim up. I'll do a comparison once I'm back with my equipment, and we'll see if it's much ado about nothing or not.
5D3 / 100D

mlrocks

Quote from: IDA_ML on July 18, 2021, 11:13:16 PM
To back up my claim that there is barely any image quality difference between 5.7k 1x3 anamorphic footage on the 5D3 and an actual CR2 still image from the full-frame sensor of that camera, I have just performed the following experiment:

1) I filmed a short clip at 1392x2340 resolution, 24 fps, 14-bit lossless, in the 5.7k anamorphic mode.  At this resolution, the crop factor is 1.38, so the final frame size is 4176x2340 pixels;
2) Then I shot a 27 MB CR2 still image of the same scene using the same lens;
3) Both the clip and the still image were opened in MLVApp and processed to my liking, and then a frame grab from the clip was exported as a JPEG.  The same was done with the CR2 image;
4) The CR2 JPEG was opened in Photoshop, slightly cropped to achieve nearly the same view as the frame grab from the clip, and then saved without further processing for comparison;
5) Both JPEGs were then compared in FastStone Image Viewer.

All results, including the original RAW files, are ready for download here:

https://we.tl/t-D56tL8qor2

Comparisons at 100% magnification are also included for the pixel peepers.  The above link will be active for 7 days.

As you can see, there is barely a difference in image quality between the two scenarios.  In fact, watching them on my 30" (2560x1440) monitor in full screen from about 1 m away, I could not tell which was which.  Even at 100% magnification, the differences are barely perceptible.  I do not see any disturbing artefacts or aliasing either.  Continuous recording, a low crop factor and stable camera operation make me feel that I can continue using the 1x3 anamorphic modes and MLVApp with confidence.

Happy pixel peeping!

A solid approach. Thanks, IDA_ML.
I checked your test photos. They are very convincing. To my eyes, at 100%, the difference looks more like codec loss than like the much lower (roughly one-third) native resolution. At original size, it is very difficult to see any difference between the two. Considering that the movie frame came from a 24 fps clip, and so carries some motion blur, I'd say the two look the same to me. In other words, if the still were acquired as part of a sequence combined into a video clip, it would look the same as the anamorphic footage, because once the (unavoidable) motion blur is applied, the difference between the two images is totally masked by it.
At 100%, the viewing distance is like sitting in the foremost walkway of a large movie theatre, even closer to the big screen than the first row. This is not realistic in normal life. I normally sit in the middle rows of big theatres to enjoy the embracing large-screen experience. I notice that the front 5 to 10 rows are typically empty as long as the theatre is not full; audiences tend to sit a little further back when they can choose their seats.

Jonneh

Quote from: Levas on July 18, 2021, 03:26:03 PM
In anamorphic mode, you have to be careful when tweaking your footage in post; too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer than crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:


Yeah, I thought it was you who posted the example that gave me confidence I wasn't mad. ;) In my case, I'm not really doing any tweaking---just MLV App defaults---and I have similar problems.

Quote from: Levas on July 18, 2021, 03:26:03 PM
What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee, and export as PNG files.
Then, with my PNG image sequence, I use FFmpeg to make a movie file with the correct aspect ratio out of it.

Interesting, I'll be sure to try that. I wonder if it has something to do with the order of operations, such that MLV App is sharpening before stretching, but if IDA_ML is getting good results (and given that he knows what he's doing), that seems unlikely.

Quote from: Levas on July 18, 2021, 03:26:03 PM
The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstrecthing.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

Ah, the inevitable sharpness--artefact tradeoff. I might be happier with a bit more softness in exchange for absence of artefacts, so I'll play around with these options. Do you happen to know what MLV App uses by default?

Since you're here and know things about the internals, any idea on the status of this, from #3: "does one have the option of implementing (horizontal) line skipping instead of binning?"
5D3 / 100D

IDA_ML

Quote from: Jonneh on July 19, 2021, 12:41:16 AM
At 100%, a few artefacts certainly appear: banding either side of vertical objects (lamp posts), zigzags in lines that would otherwise be clean (tramlines), some jaggies (the cable on the roof in the foreground), horizontal stretching in fence poles, a general lack of definition in the traffic signs, and rather muted reds (which could be down to the editing). But now we're in full-on pixel-peeping mode, belong in another forum, and will probably never film anything beyond a brick wall and a studio test scene.

If these are the imperfections that are bothering you then, I would say, you are too demanding of your footage.  Nobody and nothing is perfect in this world, but the question is: do these artefacts matter when watching 4k footage, and are they perceptible at all?  I am not aware of anyone watching movies at 100% magnification.  As I mentioned here:

https://www.magiclantern.fm/forum/index.php?topic=26105.msg235661#msg235661

even my daughter is crazy about the anamorphic modes, since she can pull selected high-quality, stunning-looking stills out of 5.7k anamorphic footage for Facebook.  Both the EF 85/1.4L IS and the EF 35/2 IS produce very beautiful portrait footage when used on a gimbal or even handheld, and MLVApp produces the best skin tones that I have ever seen.

As far as your problem (trees with the sky behind them) is concerned, I am still very surprised that the artefacts are visible even in full-screen view.  Please try different apertures and maybe a different lens.  I have experienced aberrations in out-of-focus areas on high-end lenses such as the 70-200/2.8L IS.  Maybe this is happening with your trees if your focus point is not on them.  Another reason could be motion blur, as mentioned by mlrocks.  If the branches are moved by the wind, this can easily happen.