"Efficient HDR" Feature development - is it possible?

Started by jossef, October 13, 2013, 04:44:11 PM


jossef

Hi all!

My name is Jossef and I'm a C++ developer. I would like to discuss the possibility of a feature I came up with.

The most common use case of HDR photography is to take something like 3 differently exposed shots and merge them afterwards.

The problem is that this technique is based on separate shots (each exposure starts from zero), which I find suboptimal for moving subjects (water waves, animals, driving vehicles, etc.).

I thought about a solution and would like to share it and find out whether it could be implemented on top of Magic Lantern's code:

For this example use case, the total exposure time is 3 seconds and the camera sensor captures without a break for the whole 3 seconds. This is the desired sensor dump timeline:

    second 0 : start capturing
    second 1 : first sensor dump (shortest exposure, for the highlights) while continuing to capture and accumulate light
    second 2 : second sensor dump (mid exposure) while continuing to capture and accumulate light
    second 3 : third sensor dump (longest exposure, for the shadows) and stop capturing

At the end we will have 3 files covering the entire 3 seconds.
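The timeline above amounts to three cumulative, non-destructive reads of the same ongoing exposure, so the dumps are bracketed exposures that all start at the same instant. A minimal sketch of how they could be merged into one HDR result, assuming hypothetical linear pixel counts and an assumed 12-bit saturation level (neither is Canon's actual format):

```python
# Sketch: merging three cumulative (non-destructive) sensor dumps into one
# HDR radiance estimate. Pixel values are hypothetical linear counts; the
# saturation level FULL_WELL is an assumed constant.

FULL_WELL = 4095  # assumed 12-bit saturation level

def merge_cumulative_dumps(dumps, times):
    """dumps: list of per-pixel count lists taken at the given times (seconds),
    all measured from the same exposure start. Returns radiance (counts/sec)."""
    n = len(dumps[0])
    radiance = []
    for i in range(n):
        # Use the longest exposure whose pixel has not yet saturated;
        # fall back to the shortest dump if even that one clipped.
        value, t = dumps[0][i], times[0]
        for d, tt in zip(dumps, times):
            if d[i] < FULL_WELL:
                value, t = d[i], tt
        radiance.append(value / t)
    return radiance

# Bright pixel saturates by second 2; dark pixel keeps accumulating.
dumps = [[3000, 10], [4095, 20], [4095, 30]]  # dumps at seconds 1, 2, 3
print(merge_cumulative_dumps(dumps, [1, 2, 3]))  # -> [3000.0, 10.0]
```

Because every dump covers the same moving subject from the same start time, there is nothing to align afterwards, which is the whole appeal over conventional bracketing.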

In other words:
    Is it possible to get a sensor dump while the sensor is still capturing?
    Is it possible to keep capturing without erasing the previous sensor data?

Thanks,
Jossef.


600duser

In theory the best approach would be to log the time and location that each pixel became saturated or exceeded a certain threshold.

Photons over time (graph) on a per-pixel basis.
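As a sketch of that idea, assuming the hardware could report the moment each pixel crossed a fixed count threshold (a purely hypothetical capability), per-pixel brightness would fall out as counts per second:

```python
# Sketch of the time-to-threshold idea: if the sensor could log when each
# pixel crossed a fixed count THRESHOLD, brightness follows as counts per
# second, independent of whether the pixel later saturates.

THRESHOLD = 2000  # hypothetical trigger level in counts

def radiance_from_crossing_times(cross_times, total_time):
    """cross_times: seconds at which each pixel hit THRESHOLD, or None if it
    never did within total_time. Returns estimated counts/second."""
    out = []
    for t in cross_times:
        if t is None:
            # Never reached the threshold: true brightness is below
            # THRESHOLD/total_time, so report that upper bound.
            out.append(THRESHOLD / total_time)
        else:
            out.append(THRESHOLD / t)
    return out

print(radiance_from_crossing_times([0.5, 2.0, None], 3.0))
```

The attraction is dynamic range: a very bright pixel crossing at 0.01 s and a dim one crossing at 3 s are both measured exactly, with no clipping.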

The limitations are likely to be what the hardware can or cannot do; the hardware's limitations will dictate the software approach.


If the operation of the camera were known well enough, one could write PC software specifically for that model to compensate for its 'known idiosyncrasies'. That approach might be more tractable and more fruitful for coders more experienced with computer software than camera hardware.


3 seconds is a long exposure time; taking a video clip and using frame stacking might work best.

If the ISO, colour balance, aperture and so on were altered during the recording, you would have a big bucket of data that is both broad and deep in terms of 'information'.


1) Most cameras shoot video at 24 to 60 fps.

2) Most good cameras can alter settings while recording video (think: a settings sweep).

3) Having saved the video clip to the SD card, it may well be possible to write an in-camera frame stacking and alignment app to process that video. Off-camera is always better, but in-camera is often more convenient. In-camera video frame stacking and stitching would be a sweet feature indeed! 360-degree HDR panorama, anyone?

Take a few video clips and import them into the various video stacking software apps out there to get a feel for the method. Stacking video is the future; it always was going to be the future, and will remain so until stills can be stacked at 1000 fps.
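A minimal illustration of the stacking step itself: averaging frames that are assumed to be already aligned (frames here are plain lists of pixel values, not real footage):

```python
# Minimal frame-stacking sketch: averaging aligned video frames reduces
# random noise roughly with the square root of the frame count. Alignment
# is assumed to have happened beforehand.

def stack_frames(frames):
    """Average a list of equally-sized frames (lists of pixel values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

noisy = [[10, 22, 98], [14, 18, 102], [12, 20, 100]]
print(stack_frames(noisy))  # -> [12.0, 20.0, 100.0]
```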





Waterfall problem.

You could perhaps write a script to take a single still at a high shutter speed and then record video afterwards. The still then acts as a mask: the background is stacked, and the masked area is filled in using data sifted from the frame stack. This is one way to freeze fast-moving subjects while still applying high levels of detail to the static background and recovering some missing information for the moving subjects.

High-speed shutter still (freeze frame) +

Slow-shutter blurred shot (masking aid for the frozen frame) +

Video clip (HDR background and fill for the frozen frame) +

Calibration tables based on camera settings, fine-tuned with the HDR data for that specific shot
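The recipe above boils down to a per-pixel blend. A toy sketch, with all data illustrative and the motion mask assumed to come from comparing the fast and slow shots:

```python
# Sketch of the mask-and-fill idea: the fast still supplies the moving
# subject, the stacked video supplies the background, and a per-pixel
# motion mask (weights in 0.0-1.0) blends them.

def composite(fast_still, stacked_bg, mask):
    """mask[i] = 1.0 keeps the fast still (moving subject), 0.0 keeps the
    stacked background; intermediate values feather the edge."""
    return [m * f + (1 - m) * b
            for f, b, m in zip(fast_still, stacked_bg, mask)]

still = [200, 200, 50]
background = [180, 60, 60]
mask = [1.0, 0.5, 0.0]   # subject pixel, edge pixel, background pixel
print(composite(still, background, mask))  # -> [200.0, 130.0, 60.0]
```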




Reminds me of the Heisenberg uncertainty principle. ;)

One cannot know both the precise location and the precise colour of a given pixel; the above is a partial workaround.


Here the fast frame and the slow frame give the precise location of fast-changing pixels (the mask), and the HDR video stacking gives you extra colour information about those pixels as well as an HDR backdrop. You can't solve the unsolvable, but you can figure out the best fudges.

Audionut

Quote from: jossef on October 13, 2013, 04:44:11 PM
In other words:
    Is it possible to get a sensor dump while the sensor is still capturing?
    Is it possible to keep capturing without erasing the previous sensor data?

To the best of my knowledge, no. With Dual ISO, for instance, a bit is flipped to direct one of the two ISO amplifiers to use a different gain.
Being able to dump the same sensor data through the amps at different gains would be quite the breakthrough: Dual ISO without any of the cons.
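To illustrate why that would be a breakthrough, here is a toy merge of one charge read at two gains; the gain ratio and saturation level are assumptions for illustration, not Canon's actual values:

```python
# Sketch of why two gains over the same charge would be an ideal HDR
# source: the high-gain read resolves shadows with less read noise, the
# low-gain read keeps the highlights from clipping.

SAT = 4095       # assumed saturation value of the ADC
GAIN_HI = 4.0    # hypothetical ISO ratio between the two amplifier paths

def merge_two_gains(low, high):
    """low/high: the same pixel charge read at unity gain and at GAIN_HI.
    Prefer the high-gain value unless it clipped at SAT."""
    return [h / GAIN_HI if h < SAT else l for l, h in zip(low, high)]

print(merge_two_gains([2000, 5], [4095, 20]))  # -> [2000, 5.0]
```

Unlike real Dual ISO, nothing here costs resolution or requires interpolation, because both reads would cover every pixel.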

Also, AFAIK, there is no way yet to clear the sensor and capture again without the shutter curtains closing.

Muf

It sounds like you're describing HDRx. So it's definitely possible, but unlikely in the context of ML given the software limitations of Canon cameras.

Joachim Buambeki

Quote from: Muf on October 14, 2013, 01:09:21 PM
It sounds like you're describing HDRx. So it's definitely possible, but unlikely in the context of ML given the software limitations of Canon cameras.
HDRx takes two exposures like every other camera; its electronic shutter just makes it possible to take them with no time in between at all, which makes joining the two exposures much easier (optical flow interpolation to match the parts of the image that moved does the rest).

Muf

Quote from: Joachim Buambeki on October 14, 2013, 03:09:40 PM
HDRx takes two exposures like every other camera; its electronic shutter just makes it possible to take them with no time in between at all, which makes joining the two exposures much easier (optical flow interpolation to match the parts of the image that moved does the rest).
Correct me if I'm wrong, but isn't that exactly what jossef is describing? ML just can't do it because you can't prevent the physical shutter from triggering.

Joachim Buambeki

Quote from: Muf on October 14, 2013, 06:46:31 PM
Correct me if I'm wrong, but isn't that exactly what jossef is describing? ML just can't do it because you can't prevent the physical shutter from triggering.
If I am not mistaken, he is talking about reading out the sensor while it is still collecting photons. I would be surprised if you could do that with the existing hardware (if it is possible at all?!).