How to enter 5x crop and move crop window from lua?

Started by ilia3101, January 07, 2018, 06:26:42 PM

ilia3101

Is there a way in the lua api to enable crop mode and precisely set position of crop window?

edit: Argh so sorry!!!! meant to put this in the Q and A. Can it be moved?

a1ex

To enable the crop mode: http://builds.magiclantern.fm/lua_api/modules/lv.html#zoom


lv.zoom = 5;
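
A slightly fuller sketch (untested; the lv.running field and lv.start() are assumed to behave as their names suggest):

-- make sure LiveView is running before requesting 5x zoom
if not lv.running then
    lv.start()
    msleep(1000)   -- give LiveView a moment to settle
end
lv.zoom = 5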


To move it around, it's not exposed, but it's not hard to do so: a wrapper around move_lv_afframe / get_afframe_pos / get_afframe_sensor_res should do the trick (maybe hiding all the complexity and using some logical coords, like percentage or 0...720, 0...480). Wanna try?

5D2 has some issues with accurate positioning when AF is enabled; see comments and workarounds in move_lv_afframe (shoot.c). It's very old code, probably needs refactoring and a second look to figure it out.

Workaround with current API: "press" the joystick center button to center the focus box, then send as many direction presses as you need; at least, this one should be repeatable (positioning-wise) and requires no knowledge about LiveView internals.
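
A rough Lua sketch of that workaround (untested; KEY.JOY_CENTER is assumed to be the joystick center-button constant, and the delay values are only guesses):

-- center the focus box, then nudge it two steps to the right
key.press(KEY.JOY_CENTER)
msleep(500)                       -- give LiveView time to re-center the box
for i = 1, 2 do
    key.press(KEY.RIGHT)
    msleep(100)
    key.press(KEY.UNPRESS_UDLR)
    msleep(500)                   -- wait for the crop window to actually move
end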

ilia3101

Quote from: a1ex on January 07, 2018, 06:34:50 PM
To enable the crop mode: http://builds.magiclantern.fm/lua_api/modules/lv.html#zoom


lv.zoom = 5;

Ah thanks, not too good at searching through API pages.

Quote from: a1ex on January 07, 2018, 06:34:50 PM
Workaround with current API: "press" the joystick center button to center the focus box, then send as many direction presses as you need;

Thanks, will use that for now :)

Quote from: a1ex on January 07, 2018, 06:34:50 PM
To move it around, it's not exposed, but it's not hard to do so: a wrapper around move_lv_afframe / get_afframe_pos / get_afframe_sensor_res should do the trick (maybe hiding all the complexity and using some logical coords, like percentage or 0...720, 0...480). Wanna try?

Might want to try later on if I have good results with this.

Not what I'm doing here... but do you think it would be possible (probably not in lua) to alternate between two crop positions for every video frame?

a1ex

Quote
Not what I'm doing here... but do you think it would be possible (probably not in lua) to alternate between two crop positions for every video frame?

If you disable Canon's vertical offset corrections (and whatever other position-dependent calibrations they might be doing), and re-apply them from scratch in post, yes. I remember g3gg0 has a proof of concept somewhere on youtube, maybe around 2012.

Unfortunately, the implementation will be camera-dependent (along the lines of crop_rec_4k). You can already play around with adtg_gui to find what registers may need tweaking for that.

Are you thinking at some sort of 4K with reduced temporal resolution at the borders of the frame, and full coverage in center?

ilia3101

Quote from: a1ex on January 07, 2018, 07:20:26 PM
If you disable Canon's vertical offset corrections (and whatever other position-dependent calibrations they might be doing), and re-apply them from scratch in post, yes. I remember g3gg0 has a proof of concept somewhere on youtube, maybe around 2012.

Unfortunately, the implementation will be camera-dependent (along the lines of crop_rec_4k). You can already play around with adtg_gui to find what registers may need tweaking for that.

Ah right, that seems a bit above my level ::) And as it's camera-dependent, it's maybe not even worth the time for such a non-versatile feature.

Quote from: a1ex on January 07, 2018, 07:20:26 PM
Are you thinking at some sort of 4K with reduced temporal resolution at the borders of the frame, and full coverage in center?

Yep. I'm not sure why no one has really talked about it before; isn't it a pretty obvious solution for fairly static shots? Or interview shots where only a subject in the middle moves a lot...

But as a first test I'm writing a script for taking panoramic video (one shot after another) along with a script to blend them together in the middle (though it would only look good in shots without action where the seam is).

Kharak

This made me think, as Ilia is mentioning alternating window positions on the sensor.

Say one was recording a 1920x1080 crop. Would it be possible to record at 24 fps with alternating windows, one furthest to the left and the other furthest to the right, to combine a 3D image? Or possibly even higher frame rates, to make the effect seamless. I would not mind the 24 fps stutter jumping between the two frames.

And of course, is the distance from furthest left to furthest right enough to physically make a 3D pop? As in, how far apart (in mm/cm) is the center of a 1920 crop at the far left of the sensor from one at the far right?

And last but not least, is having only one lens hindering the possibility of a 3D perspective?

Just writing this makes me think a1ex probably already considered it and dumped it because of the answers to these questions.

Still, I would like a smart explanation from someone.


Walter Schulz

You will get no 3D using sensor area shifting. Entrance pupil won't move -> You will get a stitched panorama.

a1ex

Quote from: Ilia3101 on January 07, 2018, 10:36:04 PM
Yep. I'm not sure why no one has really talked about it before; isn't it a pretty obvious solution for fairly static shots? Or interview shots where only a subject in the middle moves a lot...

That should look a bit better than those vertical videos with massive blur on the sides :D

ilia3101

 :D


Quote from: Kharak on January 08, 2018, 02:52:47 AM
And last but not least, is having only one lens hindering the possibility of a 3D perspective?

Ummm no... I did this ages ago already ;D Create a lens filter that's blue on one side and red on the other, put it on a lens with a big aperture, and you have 3D, red-and-blue-glasses-ready video:
https://drive.google.com/file/d/1xyOoGdb5VvfAzkAHH2QBjak1oUVuu-pN/view
The only downside is it looks awful.

ilia3101

Back to this.

If I start recording a movie using the movie.start() function in Lua, how can I make the script wait until recording stops (either from the buffer running out or the user stopping it)?

a1ex

Something like this:

movie.start()

while movie.recording do
  sleep(1)
end


Here's an example that lets a movie run for up to 1 minute (or until it stops by itself).
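
A minimal sketch of such a loop, using only movie.start(), movie.stop(), movie.recording and msleep() as they appear elsewhere in this thread:

-- record for up to 1 minute, or until recording stops by itself
movie.start()
local elapsed = 0
while movie.recording and elapsed < 60000 do
    msleep(1000)                  -- poll once per second
    elapsed = elapsed + 1000
end
if movie.recording then
    movie.stop()                  -- still rolling after 1 minute: stop it
end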

ilia3101

Finally realised I should also do "key.press(KEY.UNPRESS_UDLR)" after a press, but it seems not to work very well. Do I need to wait a few milliseconds before calling unpress? And what is the optimal amount? I just want to register one directional press to move the crop window.

a1ex

In theory, it should work without any delay (as the requests are queued and handled by GuiMainTask sequentially). In practice, YMMV.

You could post a minimal example; maybe it's something else. However, I'm currently without any cameras nearby, so... cannot test.

ilia3101

As I've discovered, it doesn't seem to work perfectly, so I assumed a delay was needed. I will experiment a little and see; I just can't get moving around to work right every time. I'll be able to post a sample in maybe half an hour.

Currently I do something like:


for i = 1, 10 do
    key.press(KEY.RIGHT); msleep(some_value); key.press(KEY.UNPRESS_UDLR);
end

a1ex

In this case, you have to wait *after* releasing the direction key. The window movement happens in the LiveView task (Evf on new cams, LiveViewMgr on old ones), and it needs quite a bit of time to switch the video mode. You could start with one second, and try lower values until it can no longer keep up.

However, between the "press" and "unpress" event, you shouldn't need any delay, but 100ms or so won't hurt either.
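
In Lua, the suggested timing looks roughly like this (a sketch only; the exact delays are the values being tuned in this thread, not verified constants):

local function nudge(direction)
    key.press(direction)
    msleep(100)                   -- optional; press/unpress are queued anyway
    key.press(KEY.UNPRESS_UDLR)
    msleep(1000)                  -- start with ~1 s after the unpress, then lower it
end

nudge(KEY.RIGHT)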

Now, it is technically possible to move the zoom window around instantly (from one frame to another), but not by calling Canon functions. Overriding CMOS registers is a start (IIRC g3gg0 had a demo on this back in 2012), but it's not enough, as the black calibration (in particular, column offsets) has to be re-done. This is, according to my limited understanding, where most of the time goes during video mode switching. If one can figure out how to avoid the black calibration step (maybe by pre-computing it in advance, and just loading the right values when needed), instant video mode switching (from one frame to another, without any extra delay) might be doable.

You may get away without black calibration if you move the zoom window vertically (as the column offsets are likely to stay the same).

ilia3101

Quote from: a1ex on September 01, 2019, 06:58:58 PM
In this case, you have to wait *after* releasing the direction key. The window movement happens in the LiveView task (Evf on new cams, LiveViewMgr on old ones), and it needs quite a bit of time to switch the video mode. You could start with one second, and try lower values until it can no longer keep up.

However, between the "press" and "unpress" event, you shouldn't need any delay, but 100ms or so won't hurt either.

Ah, a delay after the press! Now it works! I tried 500 ms and it still works; I will try out some lower values. Seems quite slow.

Edit: 260ms seems like a good time to wait at the end. 235ms leads to some failures, 250ms seems to work fine, so I'll go with 260ms.

Quote from: a1ex on September 01, 2019, 06:58:58 PM
Now, it is technically possible to move the zoom window around instantly (from one frame to another), but not by calling Canon functions. Overriding CMOS registers is a start (IIRC g3gg0 had a demo on this back in 2012), but it's not enough, as the black calibration (in particular, column offsets) has to be re-done. This is, according to my limited understanding, where most of the time goes during video mode switching. If one can figure out how to avoid the black calibration step (maybe by pre-computing it in advance, and just loading the right values when needed), instant video mode switching (from one frame to another, without any extra delay) might be doable.

You may get away without black calibration if you move the zoom window vertically (as the column offsets are likely to stay the same).

This would all need to be done in C though, wouldn't it? I guess maybe if I decide to try writing a module :)

Quote from: a1ex on September 01, 2019, 06:58:58 PM
You may get away without black calibration if you move the zoom window vertically (as the column offsets are likely to stay the same).

So it may be more efficient to move the window more often vertically and less often horizontally, even when using simple button presses from Lua.

Here's my first working version:

-- 5D2 Panorama
-- Takes a panorama vid in crop 3008x1080 mode on 5D2

local delay_after = 260
local delay_between = 10

function Record()
    movie.start()
    msleep(4000);
    while movie.recording do
        msleep(1000)
    end
end

function Move(X, Y)
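    -- Positive X moves the window RIGHT, negative X moves it LEFT;
    -- positive Y moves it UP, negative Y moves it DOWN (so Move(10, -6) below = 10 right, 6 down)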
    if (X < 0) then
        X = -X
        for i = 1,X do
            key.press(KEY.LEFT); msleep(delay_between); key.press(KEY.UNPRESS_UDLR); msleep(delay_after);
        end
    else
        for i = 1,X do
            key.press(KEY.RIGHT); msleep(delay_between); key.press(KEY.UNPRESS_UDLR); msleep(delay_after);
        end
    end
    if (Y < 0) then
        Y = -Y
        for i = 1,Y do
            key.press(KEY.DOWN); msleep(delay_between); key.press(KEY.UNPRESS_UDLR); msleep(delay_after);
        end
    else
        for i = 1,Y do
            key.press(KEY.UP); msleep(delay_between); key.press(KEY.UNPRESS_UDLR); msleep(delay_after);
        end
    end
end

--[[ Window should be in top left corner for this to work ]]

msleep(1000)
lv.zoom = 5
msleep(1000)

--[[ 6 DOWN, 10 RIGHT ]]
Move(10, -6)
--[[ Record ]]
Record()
--[[ 27 Right ]]
Move(27, 0)
--[[ Record ]]
Record()
--[[ 12 Down ]]
Move(0, -12)
--[[ Record ]]
Record()
--[[ 27 Left ]]
Move(-27, 0)
--[[ Record ]]
Record()
--[[ 10 Down ]]
Move(0, -10)
--[[ Record ]]
Record()
--[[ 27 Right ]]
Move(27, 0)
--[[ Record ]]
Record()

--[[ Done ]]


Just filmed my first test clip(s), going to try it out in my mlv stitching experiment (https://www.magiclantern.fm/forum/index.php?topic=20025.msg219905#msg219905)

ilia3101



5640x3000!

Will try and get some actual shots at sunrise.

Danne


ilia3101

Maybe I have not made what I'm doing very clear. It's basically this: I've written a script that moves the crop window around the sensor and records crop videos at six different locations; those videos can then be stitched together in an experimental MLV stitcher. This achieves 5.6K resolution on the 5D Mark II.

If this turns out well, I may work a little more on the MLV stitcher and get it working on other operating systems.

The biggest limitation of this method is that it will only work on very static shots, like landscapes with some grass moving and nothing more; maybe some objects can move too, as long as they stay within one of the frames.

Slightly boring sample: https://drive.google.com/open?id=1B6EBMWlVyZYLm4Tp2iLVc3UvgmOm0pWU

DeafEyeJedi

Quote from: Danne on September 01, 2019, 10:04:17 PM
What are you up to  :o 8)?  Stitching images?

Ha Ha @Danne!

Quote from: Ilia3101 on September 01, 2019, 10:21:06 PM
The biggest limitation of this method is that it will only work on very static shots, like landscapes with some grass moving and nothing more; maybe some objects can move too, as long as they stay within one of the frames.

Hmmm this is indeed interesting.  :o

Danne

Quote from: Ilia3101 on September 01, 2019, 10:21:06 PM
Slightly boring sample: https://drive.google.com/open?id=1B6EBMWlVyZYLm4Tp2iLVc3UvgmOm0pWU
Perfect example. But how are you getting 13 frames? I was thinking that after stitching there would be only one frame? I think you've come very far already; the code and stitching are already working.
I tried getting it to work for Mac but no luck. Early help would be appreciated.
As a1ex mentions, working the CMOS regs would be fast. Maybe on a per-frame basis? And hopefully the calibration step could be included here too, but still. For static objects it seems to work nicely already.

a1ex

For static objects, you could use plain old still pictures, or - if video is desired - full-res LiveView (4 fps on 5D2, at 5632x3752).

If a higher frame rate is desired, 1x3 sampling (1880x3752 resized to 5640x3752) would still use all pixels from the sensor. Expecting about 10 fps for this one, on 5D2. Recovering the full resolution from this might be possible with some artificial intelligence, as I couldn't get anything interesting with "classic" techniques :P

Per-frame adjustment of the readout window might be interesting (full frame rate in the middle of the image, temporal interpolation on the sides), but not trivial to implement. This kind of temporal interpolation might be applicable to dual iso video, too (as it currently uses only spatial interpolation, i.e. each frame is rendered as a still picture, without considering other frames).

ilia3101

Quote from: Danne on September 02, 2019, 08:14:59 AM
Perfect example. But how are you getting 13 frames? I was thinking that after stitching there would be only one frame? I think you've come very far already; the code and stitching are already working.
I tried getting it to work for Mac but no luck. Early help would be appreciated.
As a1ex mentions, working the CMOS regs would be fast. Maybe on a per-frame basis? And hopefully the calibration step could be included here too, but still. For static objects it seems to work nicely already.

Originally it was 180 frames; I cut it down for size. It was a panorama of six videos, each about 8 seconds long. I also have some much nicer test videos from this morning, will post in a bit.


Quote from: a1ex on September 02, 2019, 09:07:32 AM
For static objects, you could use plain old still pictures, or - if video is desired - full-res LiveView (4 fps on 5D2, at 5632x3752).

I would love to have lower FPS higher resolution video presets, but reddeercity seems to only want to make 24fps presets. If I knew how to do this stuff myself I would have already created many presets like that. And every time I try out adtg_gui I fail at replicating whatever I want to achieve.


Quote from: a1ex on September 02, 2019, 09:07:32 AM
If a higher frame rate is desired, 1x3 sampling (1880x3752 resized to 5640x3752) would still use all pixels from the sensor. Expecting about 10 fps for this one, on 5D2. Recovering the full resolution from this might be possible with some artificial intelligence, as I couldn't get anything interesting with "classic" techniques :P

Wow this is actually a very interesting idea to think about. Must be possible to recover with some AI. If only I knew how to do that stuff :( Reading some books,  still a noob though.

Quote from: a1ex on September 02, 2019, 09:07:32 AM
Per-frame adjustment of the readout window might be interesting (full frame rate in the middle of the image, temporal interpolation on the sides), but not trivial to implement. This kind of temporal interpolation might be applicable to dual iso video, too (as it currently uses only spatial interpolation, i.e. each frame is rendered as a still picture, without considering other frames).

This would be an interesting thing. Another idea for dual iso video: switch the low and high iso bands every other frame. That way it might be possible to get even better interpolation. But also very difficult to implement I guess.

a1ex

Quote from: Ilia3101 on September 02, 2019, 08:24:54 PM
Another idea for dual iso video: switch the low and high iso bands every other frame. That way it might be possible to get even better interpolation. But also very difficult to implement I guess.

Actually, I did exactly that, back then. Didn't try to postprocess such a sequence, though; it was just an early experiment.