Messages - Jonneh

#1
Changing the frame rate from the FPS override menu does work while recording! I did a test filming a stopwatch and ramping up the fps from 5 to 20, and the maths in the output videos comes out as expected. I haven't checked whether fine tuning also works on the fly, although I suppose the default expectation is that it would work in the same way. The FPS override function is particularly buggy, so I required several battery removals before I got the test running, but I'm sure I'll identify the precipitating combinations of factors in due course.

The proof of the various aspects of the method will be in the pudding, of course. Even if it doesn't pan out, or doesn't produce an accurate enough result, the way you describe the functioning of the clocks and a general route to syncing them is very encouraging. Thanks! Anything in the hundreds of microseconds should produce good results.

Not long now until the second body is with me.
#2
Cheers, names. I'm certainly going to be jumping in as soon as I can, but your initial ideas are very helpful. I really like the frame-rate tweaking idea --- it certainly works on my mental simulator, and hopefully in practice too.

Syncing the clocks also sounds good, albeit requiring more work, and using PTP to do it sounds like it is beyond my current abilities as a programmer/hardware tinkerer. I assumed that initiation jitter would be too much for this kind of technique to help, but perhaps that is unfair.

The more I think about it, the more things seem possible with this kind of setup beyond those listed above (3D, intentional parallax shifts, double framerate recording, focus stacked video...). If it hasn't been done, perhaps that means I'm about to run into a bunch of obstacles, but I suppose we'll see.

EDIT: I don't have the second 5D3 yet, but I had a look into producing the strobe banding that should enable me to synchronise the offsets. I expected to need an Arduino, but I found a fantastic little Android app called Strobily that strobes the torch with controls for frequency (to three decimal places) and duty cycle (1% increments, equal to 400 microseconds at 25 fps). I never managed to catch a whole band (at low duty cycles), but increasing the duty cycle makes it easy enough to catch a leading or trailing edge, which should be enough. There is some degree of jitter from frame to frame (I would estimate a standard deviation of perhaps 20% of the 10x zoom frame), but there is little to no precession over time, so three cheers to both ML (with FPS override on) and Strobily. Looking good!
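For anyone wanting to check the duty-cycle arithmetic, here's the quick sum (plain Python, numbers straight from the above):

```python
# At 25 fps, one frame period is 40 ms, so each 1% duty-cycle step on
# the strobe corresponds to 400 microseconds of torch-on time.
FPS = 25
frame_period_us = 1_000_000 / FPS        # 40,000 us per frame
duty_step_us = 0.01 * frame_period_us    # one 1% increment

print(frame_period_us)  # 40000.0
print(duty_step_us)     # 400.0
```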
#3
Through a strange (and wonderful) turn of events, I've ended up with two 5D3s in my hands, and I plan to have some fun with them for an upcoming short film. Things I'd like to play with range from high dynamic range capture, stitching for double resolution, fusing lens effects, to fusing different parts of the EM spectrum. Most of these require close to pixel-level alignment of objects in the frame for good results. In this context, I'm not too worried about parallax error since most scenes would be shot at long distance, but inter-frame synchronisation between the two cameras is going to be an obstacle.

Based on examples I've seen (e.g. https://joancharmant.com/blog/sub-frame-accurate-synchronization-of-two-usb-cameras/), I think I'll need sub-millisecond synchronisation to have acceptable alignment and avoid excessive loss of detail/double images. At 25 fps, I can expect anywhere from 0 to 20 ms of misalignment with close to random initiation of recording at this temporal resolution. Launching randomly (that is, random with respect to this high temporal resolution), I estimate I'd need about 50 attempts to have a 90% chance of happening upon a sub-millisecond inter-frame difference between the two cameras' streams --- not practical.
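As a sanity check on that ~50 figure, here's the back-of-envelope calculation I'm using (assuming each attempt's offset is uniform over the 0-20 ms range and counting a hit as an offset under 1 ms):

```python
import math

# Each attempt's inter-frame offset is assumed uniform over 0-20 ms
# (half the 40 ms frame period at 25 fps); a "hit" is an offset
# under 1 ms.
p_hit = 1.0 / 20.0            # 5% chance per attempt
p_all_miss_target = 0.1       # i.e. a 90% chance of at least one hit

# Smallest n with 1 - (1 - p_hit)^n >= 0.9
n = math.ceil(math.log(p_all_miss_target) / math.log(1 - p_hit))
print(n)  # 45 -- same ballpark as "about 50 attempts"
```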

Two questions then arise: 1) how I can know whether I've achieved this and 2) how this hit rate can be improved upon.

For 1), I can use the process described in the above link: set up a strobe synced to the frame rate and play with the duration such that I can align the resulting banding between the two cameras.

For 2), does anyone have any ideas about which of ML's features can be leveraged to get pretty close to sub-millisecond synchronisation, such that I might only require a few attempts to get a very small gap between the two streams? I'm wondering if the pre-record option in the RAW video menu combined with the recording trigger option might ready the buffer such that, on launching recording with a Y-split remote release cable or similar, the latency might be low enough to consistently get low single-digit offsets, from which point I could simply retrigger manually until I get something acceptable.

P.S. I'm optimistically assuming that there won't be any precession of one stream in relation to the other over time, but I have no idea whether or not this will indeed be the case.
#4
Hmm, this looked promising, but I couldn't get it working (Dokan 1.0.3.1000 on Windows 10 x64). After a bit of digging, it turned out that the --resolve-renaming parameter was causing an I/O error when trying to enter the mounted Dokan (Z:) drive and causing Resolve to crash when importing the drive as a source. Has that been a problem for anyone else? It doesn't seem so. Everything works fine with that parameter removed, and the clips load as image sequences in both Resolve and Premiere anyway, so I'm not sure it's necessary.

I'd love to get this working with Lightroom too for its new AI denoise function for clips that are really beyond the pale, but it tends to unmount the MLVFS-created drive as soon as one starts importing.

P.S. I naturally had to run the mlvlinks.bat script first for the process to work, although this doesn't appear in the readme. I ended up just adding the mlvfs.bat prompt to the mlvlinks.bat script to complete both tasks with a single click.
#5
Gorgeous. The look in this mode and the OOC colours are really special, even next to the great cameras that have come out in the last few years.
#6
In my hands I get fairly similar CF:SD write ratios of around 1.2:1 for both 3.8K and 3.5K modes (1536 and 1730 vertical, respectively). The 5.7K anamorphic mode, on the other hand, is wildly unbalanced, at something like 2.5:1.

With a fast CF card and SD overclocking, the sum of the max speeds when not card spanning (something like 90 and 80 MB/s) is well over the maximum combined speed when card spanning (approx. 140 MB/s), so perhaps that'd mean there'd be room for balancing the write load without reducing the total data rate?
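To make the balancing idea concrete, here's a toy sketch using the rough figures above (all numbers are the approximations from this thread, not measurements):

```python
# Approximate solo maxima and spanned total from the discussion above.
CF_CAP, SD_CAP, TOTAL = 90.0, 80.0, 140.0

def fits(cf_rate, sd_rate):
    """True if both per-card write rates are within their solo maxima."""
    return cf_rate <= CF_CAP and sd_rate <= SD_CAP

# A 2.5:1 CF:SD ratio (like the 5.7K anamorphic mode) at the spanned
# total would push CF to ~100 MB/s -- over its solo limit:
cf = TOTAL * 2.5 / 3.5   # 100 MB/s
sd = TOTAL - cf          # 40 MB/s
print(fits(cf, sd))      # False

# A rebalanced 90/50 split carries the same 140 MB/s total with both
# cards inside their limits:
print(fits(90.0, 50.0))  # True
```

So at least on paper, rebalancing could keep the total data rate while staying under each card's ceiling.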

Either way, it's true that some modes are more reliable than others in a way that doesn't seem to depend only on the required data rate. I always assumed it was due to different resolutions differentially affecting the compressibility of the image depending on the type of scene (some modes seem fine until faced with highly detailed high key regions whereas others are closer to the edge but seem less fazed by such areas), but this is pure speculation.
#7
Really nice! Simple and beautiful while still telling a story. Love the detail and colours.

What are the technical details of the timelapse, and how did you do the pan down?
#8
Quote from: a.sintes on May 28, 2023, 07:24:59 PM
Note anyway HDMI output doesn't work properly on 5D3 when switching to any crop modes - at least using latest Danne's build (no "framed" preview, only the "realtime" which is unsuable in most cases).

With the mirroring function on 1.2.3 you get the cropped realtime preview on the external monitor and the correctly framed preview on the camera LCD at the same time (currently the "ultrafast gray" preview in Danne's latest build). The former is great for focusing (if the subject is in the centre of the frame) and the latter is low res but good enough for framing.

Unfortunately one does lose about "1 stop" of speed in 1.2.3 versus 1.1.3, such that you have to step down from 14 to 12 bit, from a 2.07:1 aspect ratio to 2.2:1 in 3.5K mode, etc. The choice is yours!
#9
Share Your Videos / Re: 5d3 3840*1536 video!
May 26, 2023, 11:24:26 AM
Looks incredible. What recording bit depth were you using here?

P.S. How lucky to see a beautiful red squirrel (although less so to have it scrabbling around in your roof cavity). Good job with the fix.

EDIT: I played around with this this afternoon and was getting pretty reliable recording at 3840x1536 at 14 bits and with the preview left on (overclocking at 240 and all of the hacks on).

The incremental changes by Danne and Bilal (possibly with input from dpjpandone?) over the last couple of years have really boosted the usability of the camera: fewer crashes, longer recording times in the higher-res modes and at higher bit depths, and the simply wonderful ultrafast_gray preview, which is perfectly good enough for framing and lets me use the x5 mode for focus on a phone/external monitor. It's a perfect setup! My girlfriend uses the Ursa Mini and A7SIII, and although they're great, she's always lusting after the ML'd 5D3's colours and rendering. :-P
#10
Quote from: bobolee on May 25, 2023, 09:02:01 AM
See the world as what it is!

The world is quantum fields. Representing anything above that is a matter of what most closely resembles our experience. We can probably agree that very few people see purple fringing (although human eyes suffer from all sorts of chromatic aberrations, many of which are corrected or at least smoothed over upstream), but when we get into colour and perspective, pretty much everything is up for grabs.
#11
Quote from: nk87 on May 24, 2023, 12:16:47 PM
All hyperlapses here shot on 60d. Stopmotion, wide angle from the top was shot on EOS R and some close up from above shot on 70d

Good to know! The detail, even at 1080p and with youtube compression, is bonkers. I admire the patience you must have for some of those shots: tiny increments in approaches, pans, zooms, etc. 
#12
Quote from: bobolee on May 24, 2023, 10:53:19 AM
I don't like any lens other than Canon EF-M 22mm f/2 STM because of the lens distortion issue of the ML system.If you shoot with the original cannon video function and with a cannon lens,the camera "knows" the lens distortion and will fix it automatically.The MLV APP has no lens distortion correcting function as far as I know,so if you want to get distortion free video,you have to use a lens with minimal distortion,that 22mm 2.0 lens has almost no distortion. CCTV lens?The idea itself is a laughable :-*

I don't understand your reasoning here.

This presumably only applies to EF(-S) lenses used with an adapter with contacts. Such lenses used with a dumb adapter, or non-Canon lenses used with any adapter, will show distortion in any mode. Correction can be done in post as normal, if desired.

Even then, certain types of distortion, such as moustache distortion, are likely to affect heavily cropped capture areas less than they would an uncropped image, since the flatter central part of the moustache curve is what the crop retains. That means one might get better results with this category of lenses/adapters in crop modes than in non-crop modes such as the Canon one.

Finally, one may like the distortion given by some lenses. My two most used wide angles are the 16-35 f4L and the Zuiko 24mm f2.8. If 24mm is wide enough for the scene, I often prefer the Zuiko, because I find the corners produced by the Canon lens, sharp and undistorted as they are, to be very distracting. They have a stretched-out look that results from them being further from the image centre than the adjacent image edges. They don't suffer from lens distortion, but they are very unnatural looking because of perspective distortion. The Zuiko, with its mild moustache, produces a much more eye-like scene.
#13
Quote from: names_are_hard on April 29, 2023, 08:08:11 PM
A mod split this out, so I'll bother giving a longer reply...

Fantastic post --- a perfect 101 for why what gets done gets done and why what doesn't, doesn't.
#14
I have a slight annoyance to report (essentially the only one in the otherwise very smooth MLV App experience), which is that error messages often pop up in an infinite loop, forcing me to Ctrl+Alt+Delete and kill the program and then wait for the hundreds of clips I typically have in the project to load again on restart.

This infinite loop most often happens when opening empty 2KB files sometimes generated by ML on my cameras (5D3 and 100D): I get an error message saying that the video file contains no frames, or something similar, and upon clicking OK, the message pops up again, I click OK, it pops up again, and so on ad infinitum. This is the most common initial cause for the error message in my case, but I've had the problem with different errors. I'm using v1.14.

Is there a workaround for this, beyond simply not importing the empty files in the first place (as I say, I've had the problem with other error causes too)?
#15
I've put in about 16 hours of filming over the past few days (100D), and the build has indeed been fairly stable and bug free. My takes don't typically go beyond two minutes in length, for what it's worth.

I do have a couple of bugs to report:

- When initiating recording from the x10 mode, I get a raw detect error or "recording stopped automagically" and a 2 KB file is produced. I often frame first and then set focus in the x10 mode and start recording from there.

Another few bugs appear occasionally, although I haven't been able to identify the specific conditions that reproduce the issues.

- I occasionally get staggered horizontal lines in the preview.



- Pressing the up and down arrows around the SET button crashes ML, requiring battery removal.

I'll post back if I get a better idea of the causes of the above.
#16
I'm a little late to the party, but I admit to jumping out of my seat with excitement when I saw that this release had appeared. I know we MLers are big on image quality (and Crop Mood delivers on that front too), but usability makes an enormous difference to both efficiency and pleasure while filming. The experience of precisely framing and controlling a smooth pan with accurate feedback on my 100D is wonderful, to the extent that I know I'm going to suffer when I go back to my 5D3 for its low-light performance and dynamic range. Real-time uncropped preview seems like a very hard nut to crack on that one.

So congratulations and thanks Bilal for some impressive breakthroughs!

I'm getting continuous recording in the 2.8K mode with 12-bit lossless in challenging scenes, and at 3K with 12-bit lossless or 2.8K with 14-bit lossless with less challenging exposure (Extreme Pro 170 MB/s card, 240 MHz overclocking, and with the various hacks turned on and global draw off). I'm a crop mode guy, so I haven't done any testing with the anamorphic modes. Also, I haven't done extensive testing with Danne's build, so I can't make direct comparisons on that front, but I'm thrilled either way.
#17
Nuts!

Which scenes used the 60/70D and which used the EOS R, out of curiosity?
#18
Donation sent as USDT, Bilal, which made the most sense. Here's hoping a suitable camera appears on the market soon and that you have the donations in hand when it does.
#19
+1 on the donor list (added to the survey). I hadn't seen the posts about the impossibility of PayPal. Crypto would be fine too.

Option 2: I have a 100D that I'd be happy to lend for an extended time (say, 8 months), but the camera has been around the world with me, and I'd be keen to have it back one day (pure sentimentality). Depending on shipping and customs costs and timescales, this may be unviable, but if the expected costs are reasonable, I'd be happy to pay outward and return shipping. Let me know, Bilal, if this sounds feasible.
#20
Raw Video / Re: AI algorithms for debinning
July 22, 2021, 09:48:56 PM
Quote from: mlrocks on July 22, 2021, 09:45:35 PM
I did not have a dng shot for the scene to compare the video footage. I think IDA_ML already did that test.

Oh, I just meant a DNG corresponding to one frame of the video, not a CR2 raw file, but you may not have the file in that format. A jpeg screen capture would do, but not to worry otherwise.
#21
Raw Video / Re: AI algorithms for debinning
July 22, 2021, 09:14:50 PM
Quote from: masc on July 21, 2021, 09:44:13 PM
Don't use the MLVApp sharpen sliders for anamorphic footage. Stretching is done after sharpening, so you'll get bad lines. Better sharpen after the export in your NLE. By default, MLVApp doesn't sharpen at all.

Good to know. Does this order of operations have to be the way it is? Intuitively, I would think that most operations, and not just sharpening, would be best done on the resized image, but I could be wrong about that. I don't typically need (or like) to sharpen, so it's unlikely that I did when I noticed the artefacts, but I'll bear it in mind for when I do a comparison and troubleshooting. By "default", I was actually referring to the resizing algorithm---good to know it's AVIR.

Quote from: mlrocks on July 21, 2021, 10:25:27 PM
I just did a test on 5D3 in the following modes: 1x1 UHD, and 1x3 anamorphic UHD.

Out of curiosity, do you have a DNG (or just a jpeg) of the anamorphic shot where you see the difference in quality in how the leaves are rendered? I'd be interested in seeing whether our results are comparable. If not, were there any jaggies and colour artefacts, or just a general softness? And did you focus on the trees or somewhere else (depending on the distance, everything may be in focus with a 28mm, so this might be moot)?

I'll do some tests of my own in a few days' time. I have the 100D with its own anamorphic mode to compare results.

NB: Interesting reflections on the state of the industry vis-à-vis motion blur and resolutions in your last reply to me. Good to hear from someone following these trends---I'm just a hobbyist who doesn't watch series. I'm told I should. ;)

Quote from: IDA_ML on July 22, 2021, 06:00:40 PM
I am wondering what is so special about the green leaves.  Is it the green color, maybe, that causes the trouble?

I always assumed it was the high contrast of a backlit object combined with the typical intricacy of branches and leaves. I've seen similar problems on silhouetted trees, so I don't think it's the green, although it was a plausible guess!
#22
Raw Video / Re: AI algorithms for debinning
July 21, 2021, 05:06:37 PM
Quote from: theBilalFakhouri on July 21, 2021, 04:17:40 PM
We use PLAY mode for running cards benchmarks since in this mode there is no overhead happening by anything, so this mode gives us the highest CF/SD card controller speed.

Gotcha---good to know!
#23
Raw Video / Re: AI algorithms for debinning
July 21, 2021, 01:47:00 PM
Quote from: theBilalFakhouri on July 20, 2021, 09:37:14 PM
In order to increase vertical resolution you need to increase FPS Timer B (increasing FPS Timer B decreases FPS) , I could do 1920x3072 1x3 @ ~20 FPS, but not in 24 FPS, in this case we need to lower FPS Timer B to get 24 FPS in 1920x3072 in 1x3 mode, but doing that broke the image and might freeze the camera . . it's weird because we didn't hit read out speed limit yet . . there are other *head* timers are related to FPS Timers, tweaking it are not enough . . maybe there are other registers needed to tweak . .


Fascinating how byzantine the gears and levers are that need to be moved to get a desired result. Proper detective work!

Quote from: theBilalFakhouri on July 20, 2021, 09:37:14 PM
In LiveView the camera uses more memory cycles resulting in lower write speeds, lowering framerate from "FPS override" helps .

In my previous tests, maximum write speed with card spanning in PLAY mode is ~155 MB/s (5D3.123) using 160 MHz, 192 MHz and 240 MHz overclock presets, in LiveView the write speed decreases a bit due to more memory cycles are used which became ~135 MB/s write speed when the framerate @ 24 FPS . .

~155 MB/s write speed limit in PLAY mode is coming from the memory (RAM), so I think it's a memory speed limit here . . bypassing this memory speed limit somehow may increase card-spanning write speed in theory . .

I see---that makes things clearer. So the 155MB/s is a RAM bottleneck and the 135MB/s (or 139, per Bender's current record) is the same minus the LiveView overhead. I'm sure I'm missing something obvious here, but what is PLAY mode? As I know it, it's just for playback, and no writing occurs there.
#24
Raw Video / Re: AI algorithms for debinning
July 20, 2021, 07:00:07 PM
Quote from: theBilalFakhouri on July 20, 2021, 04:48:05 PM
There is no 3584x1730 crop mode in 5D3


It might be slightly modified in the latest build (I don't have my camera with me to check), but it was there in Danne's September 2020 build (see post 619 here: https://www.magiclantern.fm/forum/index.php?topic=23041.600). Either way, I take your point that there seem to be different sensor readout limits in different modes, which is very interesting (I'm assuming you have a way of knowing that the limiting factor in each case is indeed the readout speed, and not something else).

Quote from: theBilalFakhouri on July 20, 2021, 04:48:05 PM

we can already do 3072x1920 @ 24 FPS in 1:1 mode, but we can't achieve 1920x3072 @ 24 FPS in 1x3 Binning (anamorphic) mode on 5D3, even if it the same read out speed


Weird! If the binning is done at the analogue level, could this affect the readout speed?

What I'm still in the dark about is where the 135 MB/s card-spanning write-speed limit comes from. Is this another mystery?
#25
Raw Video / Re: AI algorithms for debinning
July 20, 2021, 03:15:31 PM
Quote from: Levas on July 20, 2021, 11:23:31 AM
The read out speed I'm talking about is literally the time it takes to read out the sensor.

Very interesting, thanks for the explanation.

Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?
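For reference, here's the arithmetic I'm doing (trivial, but it shows where the 148.8 comes from):

```python
# Pixel throughput of the 3.5K crop mode at 24 fps.
width, height, fps = 3584, 1730, 24

pixels_per_frame = width * height            # 6,200,320 ~= 6.2 MP
mp_per_second = pixels_per_frame * fps / 1e6 # megapixels per second

print(round(mp_per_second, 1))  # 148.8 -- above the 132 MP/s you mention
```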

Quote from: Levas on July 20, 2021, 11:23:31 AM

Not much of a problem though, because at the moment, writing speed is the biggest bottleneck.

If this is the case, why is it that the maximum observed speed of around 135 MB/s when card spanning is less than the sum of the max speeds to CF and SD cards (approx. 90 + 70 = 160 MB/s) when not card spanning?