Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Jonneh

Pages: [1]
Raw Video / Re: AI algorithms for debinning
« on: July 22, 2021, 09:48:56 PM »
I did not have a DNG shot of the scene to compare with the video footage. I think IDA_ML already did that test.

Oh, I just meant a DNG corresponding to one frame of the video, not a CR2 raw file, but you may not have the file in that format. A jpeg screen capture would do, but not to worry otherwise.

Raw Video / Re: AI algorithms for debinning
« on: July 22, 2021, 09:14:50 PM »
Don't use the MLVApp sharpen sliders for anamorphic footage. Stretching is done after sharpening, so you'll get bad lines. Better to sharpen after export in your NLE. By default, MLVApp doesn't sharpen at all.

Good to know. Does this order of precedence have to be the way it is? Intuitively, I would think that most operations, and not just sharpening, would be best done on the resized image, but I could be wrong about that. I don't typically need (like) to sharpen, so it's unlikely that I did when I noticed the artefacts, but I'll bear it in mind for when I do a comparison and troubleshooting. By "default", I was actually referring to the resizing algorithm---good to know it's AVIR.

I just did a test on 5D3 in the following modes: 1x1 UHD, and 1x3 anamorphic UHD.

Out of curiosity, do you have a DNG (or just a jpeg) of the anamorphic shot where you see the difference in quality in how the leaves are rendered? I'd be interested in seeing whether our results are comparable. If not, were there any jaggies and colour artefacts, or just a general softness? And did you focus on the trees or somewhere else (depending on the distance, everything may be in focus with a 28mm, so this might be moot)?

I'll do some tests of my own in a few days' time. I have the 100D with its own anamorphic mode to compare results.

NB: Interesting reflections on the state of the industry vis-à-vis motion blur and resolutions in your last reply to me. Good to hear from someone following these trends---I'm just a hobbyist who doesn't watch series. I'm told I should. ;)

I am wondering what is so special about the green leaves.  Is it the green color, maybe, that causes the trouble?

I always assumed it was the high contrast of a backlit object combined with the typical intricacy of branches and leaves. I've seen similar problems on silhouetted trees, so I don't think it's the green, although it was a plausible guess!

Raw Video / Re: AI algorithms for debinning
« on: July 21, 2021, 05:06:37 PM »
We use PLAY mode for running card benchmarks since in this mode there is no overhead from anything else, so it gives us the highest CF/SD card controller speed.

Gotcha---good to know!

Raw Video / Re: AI algorithms for debinning
« on: July 21, 2021, 01:47:00 PM »
In order to increase vertical resolution, you need to increase FPS Timer B (increasing FPS Timer B decreases FPS). I could do 1920x3072 1x3 @ ~20 FPS, but not at 24 FPS; in that case we need to lower FPS Timer B to get 24 FPS at 1920x3072 in 1x3 mode, but doing that broke the image and might freeze the camera. It's weird, because we didn't hit the readout speed limit yet. There are other *head* timers related to the FPS timers, and tweaking them alone isn't enough; maybe there are other registers that need tweaking.

Fascinating how byzantine the gears and levers are that need to be moved to get a desired result. Proper detective work!
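For anyone else following along, the timer relationship Bilal describes can be sketched roughly like this. Note this is only a toy model: the base clock value below is a placeholder, not the 5D3's actual figure, and the real timers have constraints this ignores.

```python
# Rough model of how the FPS timers set the frame rate.
# ASSUMPTION: the base clock below is an illustrative placeholder;
# the commonly cited approximation is FPS = clock / (timerA * timerB).

def fps_from_timers(clock_hz, timer_a, timer_b):
    """Approximate frame rate for given timer divisors."""
    return clock_hz / (timer_a * timer_b)

clock = 24_000_000  # placeholder base clock in Hz

# Increasing Timer B (more vertical readout time per frame) lowers FPS:
print(fps_from_timers(clock, 500, 2000))  # -> 24.0
print(fps_from_timers(clock, 500, 2400))  # -> 20.0
```

This matches the trade-off in the quote: raising Timer B buys vertical resolution at the cost of frame rate, and getting both back means lowering it again, which is where the breakage starts.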

In LiveView the camera uses more memory cycles, resulting in lower write speeds; lowering the framerate via "FPS override" helps.

In my previous tests, the maximum write speed with card spanning in PLAY mode was ~155 MB/s (5D3.123) using the 160 MHz, 192 MHz and 240 MHz overclock presets. In LiveView the write speed decreases a bit because more memory cycles are used, becoming ~135 MB/s at 24 FPS.

The ~155 MB/s write speed limit in PLAY mode comes from the memory (RAM), so I think it's a memory speed limit. Bypassing it somehow might increase card-spanning write speed, in theory.

I see---that makes things clearer. So the 155 MB/s is a RAM bottleneck, and the 135 MB/s (or 139, per Bender's current record) is the same minus the LiveView overhead. I'm sure I'm missing something obvious here, but what is PLAY mode? As I know it, it's just for playback, and no writing occurs there.

Raw Video / Re: AI algorithms for debinning
« on: July 20, 2021, 07:00:07 PM »
There is no 3584x1730 crop mode in 5D3

It might be slightly modified in the latest build (I don't have my camera with me to check), but it was there in Danne's September 2020 build (see post 619 here). Either way, I take your point that there seem to be different sensor readout limits in different modes, which is very interesting (I'm assuming you have a way of knowing that the limiting factor in each case is indeed the readout speed, and not something else).

we can already do 3072x1920 @ 24 FPS in 1:1 mode, but we can't achieve 1920x3072 @ 24 FPS in 1x3 binning (anamorphic) mode on 5D3, even if it's the same readout speed

Weird! If the binning is done at the analogue level, could this affect the readout speed?

What I'm still in the dark about is where the 135 MB/s card-spanning write-speed limit comes from. Is this another mystery?

Raw Video / Re: AI algorithms for debinning
« on: July 20, 2021, 03:15:31 PM »
The read out speed I'm talking about is literally the time it takes to read out the sensor.

Very interesting, thanks for the explanation.

Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?
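For the record, here's the arithmetic behind the question, spelled out (the 132 MP/s figure is from the post being replied to, not something I've measured):

```python
# Sanity check of the readout-rate arithmetic for the 3.5K crop mode.
width, height, fps = 3584, 1730, 24

mp_per_frame = width * height / 1e6   # megapixels per frame
mp_per_second = mp_per_frame * fps    # megapixels per second

print(round(mp_per_frame, 2))   # -> 6.2
print(round(mp_per_second, 1))  # -> 148.8, vs. the 132 MP/s budget quoted
```

148.8 MP/s would indeed exceed a 132 MP/s readout budget, hence the puzzlement.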

Not much of a problem though, because at the moment, writing speed is the biggest bottleneck.

If this is the case, why is it that the maximum observed speed of around 135 MB/s when card spanning is less than the sum of the max speeds to CF and SD cards (approx. 90 + 70 = 160 MB/s) when not card spanning?
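The puzzle in numbers (the individual card figures are the approximate maxima quoted elsewhere in this thread, not fresh measurements):

```python
# Individual card write maxima vs. the observed card-spanning rate.
cf_max, sd_max = 90, 70     # MB/s, approximate individual write speeds
spanned_observed = 135      # MB/s, observed total with card spanning

shortfall = cf_max + sd_max - spanned_observed
print(shortfall)  # -> 25 MB/s unaccounted for by the write step alone
```

If the final write step were the bottleneck, spanning should approach the 160 MB/s sum; the ~25 MB/s gap is what points upstream (RAM, per the later posts).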

Raw Video / Re: AI algorithms for debinning
« on: July 19, 2021, 10:33:59 PM »
Can a photographer use a 30 mp high resolution camera to take a perfectly clear and detailed image without any blur when the camera is moving and when the subject is moving? If not, why bother with such a high resolution camera?

I do agree with the thrust of your argument (diminishing returns). A couple of counterpoints:

I haven't looked into the extent to which motion blur annuls gains in resolution, but it's at least plausible that a streaking point looks better than a streaking blob by as much as a point looks better than a blob. By analogy in the world of stills, an astrophotographer capturing a star streak still cares about resolution. If we're talking about a truly Parkinsonian cameraman or a Jason Bourne fight scene, it may be another matter.

Even if that isn't the case, we should probably be careful of overestimating the proportion of scenes affected by motion blur. In the experimental stuff I film and watch, it's pretty low. Elsewhere, scenes in which foreground and background are both blurred are firmly in the minority (at least according to this viewer). Just having one static element is enough to lend the impression of overall detail to a scene, whence the value of sufficient resolution. Even brief moments of stillness in an otherwise movement-filled scene can give this impression.

As for whether a "debinning" algorithm can produce gains qualitatively different from those of a scaling algorithm, I'll have to defer to someone more knowledgeable than I. Since the binning occurs at the analogue level (see the fantastic thread on pixel binning patterns in LiveView), you are presumably talking about some kind of rebayering, followed by a second debayering step. Whether or not this would (or could) be non-zero-sum, I don't know.

Raw Video / Re: AI algorithms for debinning
« on: July 19, 2021, 04:20:37 PM »
The 5d3 sensor has the fastest readout speed of all ML cameras, that's why it has higher resolutions available in crop and anamorphic modes, but still not fast enough to readout 3840x2160x24fps.

"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

At this point, attempting to push this limit, if that's even possible, isn't so much about increasing resolution for its own sake; I'm sure most of us agree that resolutions above 2 or 3K give starkly diminishing returns in terms of the impression of quality, and 4K is plenty even for the big screen (since people adjust their angle of view, be it on a phone or in a 25 m cinema). Steve Yedlin does a fantastic analysis of this claim.

Rather, it's about decreasing the crop factor while maintaining the image quality advantages of 1x1 modes over 3x3 or 1x3 modes (real for me in the case of 1x3; perhaps less so for other people). Not only does 1x1 look marginally better at full screen, it is also more tolerant of cropping, which gets lots of use, especially in more experimental cinema. If there are ways around the shortcomings of 1x3 modes, I'll be thrilled to find them, but I personally am not there yet.

Raw Video / Re: AI algorithms for debinning
« on: July 19, 2021, 03:33:51 PM »
If these are the imperfections that are bothering you then, I would say, you are too demanding of your footage.

They aren't. Sorry if I didn't make that clear, although I did try to. But if you say "happy pixel peeping", I'm going to pixel peep! :D

Just as others prefer the overall impression of the anamorphic mode at standard viewing distances, I prefer the overall impression of the crop modes. I think the eye is good at picking up on things that don't look right, even if it can't see the artefacts themselves. I think that's what's going on in my case, but I need to provide some examples (away at the moment). When I watch masc's anamorphic stuff, I think it looks fantastic, so as is usually the case, problems have solutions, even if we haven't identified them yet.

Raw Video / Re: AI algorithms for debinning
« on: July 19, 2021, 12:52:53 AM »
In anamorphic mode, you have to be careful when tweaking your footage in post: too much (local) contrast or sharpness and you get jagged edges.
Anamorphic is softer compared to crop to begin with, and there's not much you can do about it; try to get some detail back and you'll get this:

Yeah, I thought it was you who posted the example that gave me confidence I wasn't mad. ;) In my case, I'm not really doing any tweaking---just MLV App defaults---and I have similar problems.

What works best for me is to export the MLV as a DNG image sequence, do some standard processing in RawTherapee and export as PNG files.
So after I have my PNG image sequence I use FFmpeg to make a correct aspect ratio movie file out of it.

Interesting, I'll be sure to try that. I wonder if it has something to do with the order of operations, such that MLV App is sharpening before stretching, but if IDA_ML is getting good results (and given that ilia knows what he's doing), that seems unlikely.

The "gauss" option in scaling works best to avoid jagged edges and weird color artifacts when unstrecthing.
FFmpeg has many more options for scaling, for example lanczos, but those are a little sharper and introduce artifacts.

Ah, the inevitable sharpness--artefact tradeoff. I might be happier with a bit more softness in exchange for absence of artefacts, so I'll play around with these options. Do you happen to know what MLV App uses by default?
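For anyone wanting to try the workflow Levas describes, here's the shape of the FFmpeg step, sketched as a Python command builder. The "gauss" and "lanczos" scaler flags are real FFmpeg options, but the input pattern, frame rate, output geometry (a 3x horizontal unstretch of a 1920x3072 1x3 clip), and ProRes output are my illustrative assumptions, not taken from his post:

```python
# Build (not run) an ffmpeg command that unstretches a PNG sequence
# from a 1x3 anamorphic clip using the "gauss" scaler.
# ASSUMPTIONS: filename pattern, fps, and 1920x3072 -> 5760x3072
# geometry are illustrative only.
import shlex

def unstretch_cmd(pattern, out_w, out_h, fps=24, flags="gauss"):
    """Return an ffmpeg invocation scaling a PNG sequence to out_w x out_h."""
    return (
        f"ffmpeg -framerate {fps} -i {pattern} "
        f"-vf scale={out_w}:{out_h}:flags={flags} "
        f"-c:v prores_ks out.mov"
    )

cmd = unstretch_cmd("frame_%06d.png", 5760, 3072)
print(shlex.split(cmd))
```

Swapping `flags="gauss"` for `flags="lanczos"` is the sharpness-vs-artefact dial discussed above.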

Since you're here and know things about the internals, any idea on the status of this, from #3?: "does one have the option of implementing (horizontal) line skipping instead of binning?"

Raw Video / Re: AI algorithms for debinning
« on: July 19, 2021, 12:41:16 AM »
Nice comparison IDA_ML---thanks for that.

Indeed, viewing as you describe, I'd be hard-pushed to pass an A/B test too. At 100%, a few artefacts certainly appear: banding on either side of vertical objects (lamp posts), zig-zags in lines which would otherwise be clean (tramlines), some jaggies (cable on roof in foreground), horizontal stretching in fence poles, a general lack of definition in traffic signs, and the reds look rather muted (could be because of editing). But at this point we're in full-on pixel-peeping mode, belong in another forum, and will probably never film anything beyond a brick wall and a studio test scene.

The problems I describe are visible in the first scenario, but it's time I produced some images to back this claim up. I'll do a comparison once I'm back with my equipment, and we'll see if it's much ado about nothing or not.

Raw Video / Re: AI algorithms for debinning
« on: July 18, 2021, 11:12:34 PM »
Frankly, I am quite surprised to hear that you are having all these problems with the anamorphic modes. I film wide-angle landscape videography on the EOS-M using the anamorphic 4k (1280x2160)/24fps/10bit lossless mode all the time and the results are fantastic! If the scene is exposed to the right but I make sure I do not blow out the highlights, I never get the artefacts that you are talking about. In high-contrast scenes, I typically increase the exposure until zebras start to appear and then dial it down by 0,5 to 1 stops. Using a high-quality lens and VND filters, as well as precise focusing, is a must! Are you sure you are not getting chromatic aberrations? My landscape lenses are the EF-S 11-18, EF 24/2,8 IS and the EF 35/F2 IS.

As far as postprocessing is concerned, I use MLVApp, and it does a hell of a job when processing anamorphic MLV files, especially the latest v. 1.13. Please try it if you haven't done so yet! The default settings are great, and if you don't use extreme adjustments you will get very pleasing results. Typically, I export to ProRes and do the video editing in Resolve, where I add some sharpness to my taste if necessary. This is fully enough to compensate for the slight anamorphic softness that Levas mentioned. That's all.

All in all, the 5k anamorphic and 5,7k anamorphic are my filming modes on the EOS-M and the 5D3, respectively. These modes are a little tricky to use, but once you learn how to squeeze the maximum image quality out of them, you will never go back to the other modes.

Yeah, I did persevere for a while, but the same issues kept reappearing. Without pixel peeping, it's mostly trees against the sky that look bad---although there it's very obvious. Zooming in reveals other issues, but that would be OK if it weren't for the first set of problems. Trees matter! ;)

Most of my lenses are vintage, but my wide-angle is my sole posh modern lens, the 16-35 F4L IS: sharp as you like, and practically free of CAs. The artefacts are often like those in Levas's image, posted to show the same thing, but I know I've seen some funkier cyan and yellow mixed in.

No blown-out highlights---I also use zebras and then dial things back---and I focus precisely on my subject. If being slightly out of the plane of focus means that the trees will inevitably show artefacts, I'm likely to resort to crop mode for most shots, unless I absolutely need a wider field of view than the 16mm end of my lens affords. No NDs---for the dusky scenes I tend to be filming, I don't find I need them (while staying at or below the approximate diffraction point of my lens). I also use MLV App to postprocess (default settings), although it's true I haven't tried 1.13. It's a shame, because I love the stability of the mode and using the full 16mm of the above lens instead of having a 24mm equivalent with added perspective distortion.

Still, these issues cropped up while filming on a trip, such that I didn't have time to do a thorough diagnosis. I'll find the time to sit down and do one, and I'll post some DNGs in the meantime, once I'm reunited with my material.

Raw Video / Re: AI algorithms for debinning
« on: July 18, 2021, 02:39:49 PM »

I think the confusion comes from the fact that you are comparing a 6,9 Mp anamorphic image [(1280x3)x1800] with a 2,07 Mp cropped image (1920x1080).

Given the wonders of scaling algorithms, this is the key point (if they were fully dumb, simply triplicating binned pixels, we'd have to explain the anamorphic mode looking better by the increased vertical resolution more than offsetting the decreased horizontal resolution, or some such). A fairer test would be to upscale a 1920x1080 image to 3840x2160, or something similar, and compare the outputs. That way, the advantage of the increased input vertical resolution (before upscaling) in the anamorphic mode is divorced from output resolution.
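The fair test proposed above, in miniature: bring both images to the same output resolution before comparing, so that output size itself isn't doing the flattering. Nearest-neighbour here stands in for the "fully dumb" scaler; real comparisons would use AVIR, lanczos, etc.

```python
# Nearest-neighbour upscale of a tiny 2D list-of-lists "image",
# as a stand-in for the dumb pixel-triplicating scaler mentioned above.

def upscale_nn(img, factor):
    """Repeat each pixel `factor` times horizontally and vertically."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in img for _ in range(factor)
    ]

small = [[1, 2],
         [3, 4]]
big = upscale_nn(small, 2)
print(big)
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A dumb scaler adds pixels but no detail; the question in this thread is how much further an intelligent scaler (or "debinner") can go.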

Raw Video / Re: AI algorithms for debinning
« on: July 18, 2021, 09:31:28 AM »
Hello, Jonneh:
I was really surprised that the anamorphic UHD looks much sharper than the cropped 1080p. I understand that the anamorphic UHD uses a larger sensor area and therefore maybe has better aesthetics. But I actually feel that the resolution of the anamorphic UHD is noticeably better than the cropped 1080p. The resolution of the anamorphic UHD looks close to the cropped 3k mode, yet the native resolution of the anamorphic UHD is just 30% more than the cropped 1080p. I really don't know why; I just share here an observation that goes against my previous assumption that the anamorphic modes do not help resolution.

You say that the anamorphic mode is "under more stress", which I take to mean that a given object will be smaller in the field of view, but are you comparing objects side by side, such that you can judge the resolving power? Otherwise, what you're noticing is likely a general impression, and as you say in your reply to Bilal, part of this is probably the upscaling algorithm doing its thing. Since the interpolated pixels are created somewhat intelligently, the illusion is maintained that information is gained, even though none is in the strict sense. I doubt any metadata is stored relating to the prebinned pixels, but I could be wrong. After all, there's not much to represent other than the intensity of the RGB pixels, so at best you could sample the discarded pixels at a lower bit depth, but then the file would be significantly larger. What's the resolution of the 1080p mode on the 650D, out of interest?

Since you have some trees in your scene, are they against a sky? If so, have you compared that area at 100% zoom between the two modes? I know the standard viewing experience isn't one of a pixel peeper, but I consistently find artefacts (coloured pixel blocks and jaggies) in tree branches against a bright sky that are visible at normal viewing distances, which has somewhat put me off those modes (which I tended to use for landscapes, often with trees in the scene). I now use (on the 5D3) one of the higher-res crop modes with a wider lens. Are you using MLV App?

Raw Video / Re: AI algorithms for debinning
« on: July 18, 2021, 03:01:47 AM »
Quick correction: all cameras do bin pixels horizontally, but only the 5D3 can also bin pixels vertically.

I stand corrected, thanks! (Original post edited to avoid misinformation). Although I knew about the horizontal binning, and that only the 5D3 had 3x3, I thought the anamorphic mode used line skipping. Good to know how it actually works.

Quick thought, and I'm sure I'm showing my ignorance here, but does one have the option of implementing (horizontal) line skipping instead of binning? If so, I wonder what the effect on aliasing would be if one alternated (cycled) the skipped pixels from frame to frame, such that you skip lines 2 and 3 in frame 1, 3 and 1 in frame 2, and 1 and 2 in frame 3 before repeating the loop. I imagine the illusion of temporal averaging would remove jaggies. Not so sure about moiré, but that might be improved too. One might see a shimmering effect at approx. 8 fps, but that might just look like inoffensive luminance noise, as long as the line alternation is done in both dimensions (and even that might not be necessary). Lots of speculation here though.

This may be moot if one is stuck with the native behaviour, but can't one sample the whole sensor and then discard unwanted lines before committing to memory? Maybe not. As you can tell, I don't have the first idea of how it works.  ;)
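To make the speculation above concrete, here's the cyclic skipping pattern I have in mind, sketched in Python (pure speculation about what the hardware could do, as stated; the scheme just rotates which one-of-three lines is kept each frame):

```python
# Cyclic 1-of-3 line skipping: each frame keeps a different residue class
# of lines mod 3, so that over any three consecutive frames every sensor
# line is sampled exactly once.

def kept_lines(frame_idx, total_lines):
    """Return the line indices kept in frame `frame_idx`."""
    keep_phase = frame_idx % 3
    return [i for i in range(total_lines) if i % 3 == keep_phase]

print(kept_lines(0, 9))  # -> [0, 3, 6]
print(kept_lines(1, 9))  # -> [1, 4, 7]
print(kept_lines(2, 9))  # -> [2, 5, 8]
```

The hoped-for effect is that, at 24 fps, the eye temporally averages the three phases (a full cycle completes at 8 Hz), trading fixed jaggies for something closer to luminance noise.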

Raw Video / Re: AI algorithms for debinning
« on: July 18, 2021, 12:08:02 AM »
In the anamorphic mode you mention, there's horizontal binning, and the desqueezing algorithm (and it does matter which one you pick) then creates the missing pixels. If it looks better to you than the crop 1080p, that could be the reduced crop factor (unless you are compensating with a different focal length, but then other confounds may come into play, such as distortion or sharpness, probably both in favour of the anamorphic mode), or it could be because the final image has a higher vertical resolution. I read somewhere that the eye is more tolerant of reduced horizontal resolution, though I'm not sure if that's actually true.

Reverse Engineering / Re: UHS-I / SD cards investigation
« on: July 15, 2021, 04:59:40 PM »
Yes, thanks to "unknown soldiers" who gave me the opportunity to get a 5D3 and work on it :) .

Just when I was thinking it was about time we pooled some resources to buy one, the soldier stepped forward. Good on him or her. Enjoy it! :-)

Regarding the stability issues with the faster overclocking presets, the explanation you propose is interesting. It would be a boon if there turns out to be a workaround, but only if the data rate bottleneck can be overcome, naturally.

Regarding this apparent 135/150 MB/s bottleneck: I'd noticed the 135 MB/s limit before, when the total rate with a fast CF + SD didn't benefit from SD overclocking. (Overclocking does give more equal write ratios, of course, which is good in itself, since the CF and SD cards have the same capacity.) Good to know that it's due to the lossless compression overhead. I suppose working around that might be a big ask, but I confess to major ignorance on the matter.

On this topic, I'm slightly confused about what you mean by there being a 150 MB/s bottleneck in PLAY mode. Are you referring to a read bottleneck, or rather a write bottleneck with LiveView off, or perhaps something else?

Great stuff all around, either way!

Raw Video / Post-hoc dark frame generation
« on: May 26, 2021, 07:05:24 AM »
I have seen good results from dark-frame subtraction (in MLV App) using either frames generated immediately before or after the clip to be processed or using frames generated some significant time afterwards, so this is something of a theoretical aside, but what are the factors that determine the pattern of noise on the sensor, such that using different conditions for dark-frame generation from those of the clip of interest will produce suboptimal results?

I imagine the area of sensor being sampled (resolution and centering), shutter speed, and ISO will be the most important factors (all possible to reproduce at any time), but what influence do factors such as temperature, or long-term variation in noise patterns, play, such that long-after-the-fact dark-frame generation and subtraction might produce suboptimal results?
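For reference, the basic operation being discussed looks like this. A minimal sketch with plain lists standing in for raw sensor data; the black-floor clamp and the averaging of several dark frames into a master dark are standard practice, but the numbers are invented:

```python
# Dark-frame subtraction in miniature: average several dark frames
# (cap on, same ISO/shutter/crop as the clip) into a master dark,
# then subtract it per-pixel from each clip frame.

def master_dark(dark_frames):
    """Per-pixel mean of equally sized dark frames (flattened pixel lists)."""
    n = len(dark_frames)
    return [sum(px) / n for px in zip(*dark_frames)]

def subtract_dark(frame, dark, black_floor=0):
    """Subtract the master dark, clamping at the black floor."""
    return [max(black_floor, f - d) for f, d in zip(frame, dark)]

darks = [[10, 12, 30], [12, 12, 34]]   # two dark frames; third pixel is "hot"
dark = master_dark(darks)              # -> [11.0, 12.0, 32.0]
clean = subtract_dark([110, 120, 140], dark)
print(clean)  # -> [99.0, 108.0, 108.0]
```

The question in the post is exactly which acquisition conditions (temperature, sensor ageing, etc.) the master dark must share with the clip for this subtraction to be valid.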

Share Your Videos / Re: Gower, Wales - 5D MKIII and Mavic Mini test
« on: May 11, 2021, 11:57:34 AM »
Thank you! I used both a Zuiko Auto-W 28mm F/2.8 and a F.Zuiko 50mm F/1.8 OM :D

I have those two too. Brilliant little (tiny, rather!) lenses with perhaps the nicest focus rings I've ever found on a vintage lens. Enjoy them! Quick question on the subject: do you get infinity focusing with the 50 mm on the 5D3? What adapter are you using?

Regarding the continuous recording, remember too that higher ISOs (or rather, noise, which isn't the same thing) have a massive effect on the compressibility of the files and therefore the data rate and ability to record continuously. I don't know if this is a relevant factor in your scenes, but just in case.
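The compressibility point is easy to demonstrate: lossless compressors thrive on smooth data and choke on noise, which is why noisy high-ISO footage yields bigger files and less continuous-recording headroom. A toy demonstration with zlib standing in for the camera's lossless codec (the camera does not use zlib; this only illustrates the principle):

```python
# Smooth vs. noisy data under lossless compression.
import random
import zlib

random.seed(0)
smooth = bytes([128] * 100_000)                               # clean midtones
noisy = bytes(random.randrange(256) for _ in range(100_000))  # pure noise

ratio_smooth = len(zlib.compress(smooth)) / len(smooth)
ratio_noisy = len(zlib.compress(noisy)) / len(noisy)
print(ratio_smooth < 0.01, ratio_noisy > 0.9)  # -> True True
```

The smooth buffer compresses to a tiny fraction of its size, while the noisy one barely compresses at all; real footage sits between the two extremes and slides toward the noisy end as ISO climbs.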

Share Your Videos / Re: Gower, Wales - 5D MKIII and Mavic Mini test
« on: May 05, 2021, 03:47:58 PM »
Lovely shots!

Which Zuiko primes were you using, out of curiosity?

Share Your Videos / Re: Winter Ambient (5DII RAW)
« on: April 24, 2021, 08:02:49 PM »
Video is about capturing motion. There is barely any motion in your video. A slide show with static pictures would do a much better job. Just my opinion.

I know what you mean, but I think I disagree. The minimal amount of motion that most of the scenes do have is very effective at conveying the absolute stillness of the various places, in a way that a slideshow wouldn't.

Share Your Videos / Re: Consumed (BAFTA Nominee) Shot on ML now public
« on: April 01, 2021, 02:12:45 PM »
A really thought-provoking and tastefully shot piece. Congratulations on its release and the nomination!

It goes without saying that the quality is stunning. I saw the odd detail about your workflow in the trailer thread, but didn't see which ML preset you used. Was it the 3.5K crop mode or rather the FHD binned mode (with an extra crop in post)?
