Full-resolution silent pictures (silent.mo)

Started by a1ex, July 01, 2014, 05:11:15 PM

chanrobi

Is the original post still accurate?

In particular, are we still restricted to exposures no longer than 15s? (I am doing astro work, so it would be great to do 30s/60s etc. without wearing out the shutter.)

On my EOS M.

garry23

A Thought/Idea

As we look forward to a 'resurgence' in ML development (fingers crossed and thanks to a1ex's tenacity) I would like to try out an idea, but first the usual caveat: forgive me if others have thought of this and/or tried it.

The idea is to create a pseudo ND feature using FRSP.

I say pseudo as it will not behave like a true ND, but with the right approach it should be pretty good.

I'm aware of a similar, frame averaging, feature in the XF IQ4, eg https://www.dtcommercialphoto.com/why-the-xf-iq4s-new-feature-update-makes-it-the-most-average-camera-in-the-world/

Then there is a slightly different approach in the OMD cameras, ie Livebulb.

If my understanding is correct, the current FRSP goes like this:

1. Enable the FRSP module and set shutter to between, say, 0.2 and 15s
2. Using the FRSP module take the FRSP
3. The FRSP module reads out the LV image into an 'image buffer', which takes about 200ms (say)
4. The FRSP module writes a DNG to the card, which takes many seconds
5. You can now take another FRSP

My 'idea' is to ask the developers if we could implement the following:

1. Enable the FRSP module and set shutter to between, say, 1 and 10s. The module will check shutter times and provide feedback on acceptability
2. Using the FRSP module start taking the pseudo ND FRSP
3. The FRSP module reads out the 1st LV image into the 'image buffer', which takes about 200ms (say). This is dead time in picture taking, which is why the minimum shutter needs to be relatively long, eg 1-2s
4. The FRSP module now takes the next frame and the previous frame remains in the image buffer
5. Once the second frame is taken, the FRSP module merges the current frame into the image buffer, using a simple median statistic at each photosite
6. The FRSP module carries on doing steps 4 and 5 until the ND time is reached
7. The FRSP module writes a DNG to the card, which takes many seconds
8. You can now take another FRSP or ND FRSP
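For illustration, the loop in steps 3-6 can be sketched offline. This is a hypothetical Python/NumPy model (not camera code), and it uses a running arithmetic mean as the merge statistic in step 5, which is one possible choice:

```python
import numpy as np

def simulate_nd_capture(frames):
    """Merge a sequence of frames into one simulated long exposure.

    Hypothetical offline model of steps 3-6: the first frame seeds the
    buffer (step 3), and each later frame is merged into it with a
    running arithmetic mean (step 5), so only one accumulator buffer
    is ever held in memory.
    """
    acc = None
    for i, frame in enumerate(frames, start=1):
        f = frame.astype(np.float64)
        if acc is None:
            acc = f                   # step 3: first frame fills the buffer
        else:
            acc += (f - acc) / i      # step 5: merge into the buffer
    return acc                        # step 7 would write this out as a DNG
```

Only one accumulator buffer exists at any time, which is the point: card usage stays at a single DNG no matter how long the simulated exposure runs.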

Photographers will only need to carry a minimum number of NDs, ie sufficient to get into the 1-10s zone, from where the FRSP ND feature will allow them to 'dial in' longer ND times.

As I'm not a developer and don't fully understand C coding, I'm not sure how practical the above is. However, if it can be implemented, I believe it would give ML another killer (photography) feature, ie similar to the XF IQ4's frame averaging.


garry23

@Grey

Yes I know I can do that and use normal approaches to construct bracket sets for processing in post, eg https://photography.grayheron.net/2021/01/music-additional-notes.html

My suggestion is to do this in camera, ie before outputting the FRSP image.

;)

names_are_hard

Interesting idea.  Avoiding the high storage and time costs by averaging frames in cam is sensible for the use case and it sounds plausible to me, although I'm not an expert.  I know there's some partly understood ability to do HW accelerated ops similar to blending.  If this did cover averaging, it should be fast enough to be practical.  I recall Alex showing some gradient images that were HW processed.  I thought it was in the ProcessTwoInTwoOutLosslessPath thread, but I checked and didn't find them there.

Depending on length of exposures you could do this with CPU, and it doesn't sound that hard from a code perspective.  Whether pure CPU can do it at acceptable speeds, I don't know.

garry23

@names_are_hard

The median merge function is 'simply' (a+b)/2, that is, sum the pixel values and halve the result. As to how one does it at the pixel level I don't know, but someone will  ;)

Anyway, for now it's just a development marker.

Cheers

Garry

names_are_hard

Yeah, it's easy per calculation, but the CPU is pretty slow (or, the CPU access to RAM is slow?).  Rough estimate: going from how long Digic 7 takes to update the LV buffer using pure CPU, and assuming 5000*3000 for a RAW buffer, it might take 20 seconds to merge two frames on CPU.  HW accel should do it in under 1 second, probably quite a lot less.  And it will eat much less battery.

Anyway, no point theory-crafting really, it's a good idea and needs a proper investigation at some point :)

Greg

Quote from: garry23 on January 15, 2021, 02:39:08 PM
@Grey

Yes I know I can do that and use normal approaches to construct bracket sets for processing in post, eg https://photography.grayheron.net/2021/01/music-additional-notes.html

My suggestion is to do this in camera, ie before outputting the FRSP image.

;)

My version doesn't have the delay between frames; it creates the buffers before capturing.
Motion blur control is better in post. If you want to do it in camera and not wait a few minutes, you have to use TWOADD (Digic 4) or EekoAddRawPath (Digic 5).
I've never had a Digic 5 camera, so I don't know how Eeko works.

TWOADD low-level frame subtraction - https://www.magiclantern.fm/forum/index.php?topic=13408.msg172108#msg172108
If you have an idea how to reconfigure it to add frames instead, this should work.

garry23

@Greg (typo in name last time, sorry)

Fully understand 'your method', and doing things in post.

The in-camera approach (which I'm sure I will be told is not feasible on all cameras in the end, but may work on some, eg 5D3b(?)) has the advantage of creating a single image, ie in camera.

Thus, for a 1s exposure, aiming for, say, a 100s simulated ND capture, instead of 100 images on the card, you just have one.


Greg

This can work on all cameras (as long as there is enough RAM on the cheaper ones), but it requires some research.
If you average a large number of frames, you will be limited in DR by the 14-bit file.  ;)

IDA_ML

Garry,

How does the number of merged images relate to the number of ND stops with this simple (a+b)/2 median merging method? 

a1ex

Quote from: garry23 on January 15, 2021, 06:22:30 PM
The median merge function is 'simply' (a+b)/2, that is sum the pixel values and half the result.

That's the arithmetic mean, and it's quite different from median. For example, median is a robust statistic (not affected by outliers, so you can use it for e.g. removing people from the image sequence). Arithmetic mean, on the other hand, "simply" simulates a long exposure, where every single frame contributes to the final image (so, any moving subjects would appear as ghosts).

https://www.mathsisfun.com/data/outliers.html
https://www.cese.nsw.gov.au/effective-practices/using-data-with-confidence-main/outliers
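The difference is easy to see numerically. A toy single-pixel example (the raw levels are made up): one pixel is observed over 9 frames, and a person walking through briefly darkens it on two of them:

```python
import numpy as np

# One pixel observed over 9 frames; a person passes through on frames
# 4-5, briefly darkening it (hypothetical raw levels).
samples = np.array([200, 202, 198, 60, 65, 201, 199, 203, 200])

print(np.median(samples))  # robust: the two outlier frames vanish
print(np.mean(samples))    # every frame contributes: ghosting
```

The median lands at 200, as if the person was never there, while the mean is pulled down to roughly 170, ie a visible ghost.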

Now, the question is whether the Eeko routines are actually useful to implement this stuff. Short answer: maybe.

Median: to compute it, you need to store all images. This is going to be tricky with full-res images, but an approximation like frugal median might be doable.

Advantages of frugal median (compared to median):
- only one "accumulator" image is required for Frugal-1U (check the paper)
- it might be good enough for practical purposes (not too far from the real median)

Difficult points:
- how to compute the sign of the image difference with Eeko routines?
- may require applying a logarithmic curve before and after processing (can it be HW-accelerated? no idea)
- a large amount of frames is required for convergence (possibly hundreds)

See also:
http://content.research.neustar.biz/blog/frugal.html
https://agkn.wordpress.com/2013/09/16/sketch-of-the-day-frugal-streaming/
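To make the frugal idea concrete, a per-pixel Frugal-1U update can be sketched like this. This is a hypothetical NumPy illustration of the algorithm from the links above, not Eeko code:

```python
import numpy as np

def frugal_median_update(estimate, frame):
    """One Frugal-1U step, applied independently at every photosite.

    The running estimate moves one code value toward each new frame;
    over many frames it converges near the per-pixel median, with only
    one accumulator image required.
    """
    # The whole update only needs sign(frame - estimate), which is why
    # computing that sign with the HW routines is the key question.
    return estimate + np.sign(frame - estimate)
```

Because the estimate moves by at most one code value per frame, convergence indeed needs many frames, consistent with the "possibly hundreds" caveat above.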

Average: should be straightforward with a small number of images, but expecting precision loss with a large image sequence.

What's the matter with this precision loss? The hardware-accelerated routines work with integer values. Additionally, you don't want clipping in the highlights.

If the number of frames is known in advance, one could naively average N images like this:

acc = 0
for i in range(N):
    acc += image[i] // N    # integer division


What's the problem?

octave:1> a = round(rand(1,8) * 10)
a =
   0   7   9   9   3   1   1   7
octave:2> sum(floor(a / length(a)))
ans =  2
octave:3> mean(a)
ans =  4.6250

octave:4> a = round(rand(1,32) * 10)
a =
   8   1   9   4   4   5   9   9   1   2   5   7   8   1   4   4   6   9   9   9   7   7   1   6   4   4   1   2   2   3   2   2
octave:5> sum(floor(a / length(a)))
ans = 0
octave:6> mean(a)
ans =  4.8438


In other words, any shadow detail you may expect to get out of this naive averaging process will be crushed to black. This gets worse as N increases.

There are several approaches possible to work around this precision loss, but these are left as an exercise to the reader. Constraints: up to 16-bit image buffers available for hardware acceleration, and only a tiny number of full-res images can fit into the camera RAM. The only hardware-accelerated image operations - known to us at the time of writing - are those described in the Eeko thread.
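Purely as a software illustration of one possible direction (offline only; an int64 accumulator obviously does not fit in the 16-bit hardware buffers mentioned above), accumulating at full precision and dividing once keeps the rounding error down to a single integer division:

```python
import numpy as np

def average_wide_acc(frames):
    """Average integer frames without crushing the shadows.

    Sketch of one workaround: accumulate into a wider integer type and
    perform the single integer division at the end, so the total
    rounding error is bounded by 1 LSB instead of growing with N.
    """
    acc = np.zeros_like(frames[0], dtype=np.int64)
    for f in frames:
        acc += f                 # lossless: no per-frame division
    return acc // len(frames)    # one rounding step for the whole stack
```

With eight frames of constant value 3, the naive per-frame division from the pseudocode above yields 0, while this version returns 3.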

Another issue is how to handle clipped highlights while averaging. Can be simulated offline, with already-captured image sequences - again, exercise for the reader :)

garry23

@IDA_ML

Quote: How does the number of merged images relate to the number of ND stops with this simple (a+b)/2 median merging method?

With this approach one is not trying to emulate a number of ND stops, but the shutter time. We use ND filters to control time; with this approach 'any' LE becomes a 'simple' matter of dialing in the number.

The number of images to take is known: if the LE time you are seeking to emulate is T and the shutter time is t, then the number of images you will need is simply T/t.

@a1ex

Quote: That's the arithmetic mean, and it's quite different from median.

My ignorance. I simply used this: https://www.mathsisfun.com/median.html, and because my idea is to only 'process' two images at a time, the maths becomes easy (a+b)/2.

I still think the way we merge multiple stacks of images in Photoshop is the way to make this work.

In the PS merge-multiple-images approach, eg https://photography.grayheron.net/2018/04/post-processing-simulated-le-brackets.html, one adjusts the opacity of each layer. Thus, in a four-layer stack, the opacity values for each layer would be (top to bottom) 1/4, 1/3, 1/2, 1/1. These layers are then simply merged, which in my naivety I thought was pairwise (a+b)/2.

Thus, if you have the first image taken in the 'image buffer', and its opacity is adjusted to 1/n, then the next image is merged into the image buffer, having its opacity adjusted to 1/(n-1) first.

Once the final image is created in this way, it is read out as usual.

So the difference is that, rather than the image buffer being purged each time, it is added to, with each captured image having had its opacity adjusted according to its position in the LE stack.
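If Photoshop's normal blend at opacity alpha is taken as alpha*top + (1 - alpha)*bottom (a simplifying assumption; actual Photoshop internals may round differently), the opacity-stack recipe can be sketched as follows. Blending frame i at opacity 1/i over the running buffer works out to exactly the arithmetic mean of the whole stack:

```python
import numpy as np

def opacity_stack(frames):
    """Composite frames like Photoshop layers with opacities 1/1..1/n.

    Hypothetical NumPy model: each new frame is blended over the buffer
    with opacity 1/i (i = its position in the stack), which reduces to
    the arithmetic mean of all frames.
    """
    acc = frames[0].astype(np.float64)       # bottom layer, opacity 1/1
    for i, f in enumerate(frames[1:], start=2):
        alpha = 1.0 / i
        acc = alpha * f + (1.0 - alpha) * acc
    return acc
```

For example, three constant frames of 2, 4 and 6 merge to 4.0, their mean.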

BTW, this idea is better called LE simulation rather than ND simulation. We are sidestepping NDs :-)

a1ex

What you describe is probably the arithmetic mean, or maybe some approximation of it (as I don't know how Photoshop works); in any case, it's a linear combination of input images. Median cannot be computed that way, and has very different properties from arithmetic mean. Both of them are measures of central tendency, but that's pretty much where the similarity ends.

https://en.wikipedia.org/wiki/Median
https://en.wikipedia.org/wiki/Arithmetic_mean

https://petapixel.com/2013/05/29/a-look-at-reducing-noise-in-photographs-using-median-blending/
https://blog.keepcoding.ch/?p=2324
https://www.investopedia.com/terms/m/median.asp

garry23

@a1ex

Thanks for the education :-)

Do you think the idea, ie adjusting each image's opacity and summing is achievable?

Or is the idea dead on the starting block?

Cheers

Garry

a1ex

Answered in reply #1086.


garry23

@a1ex

Quote: Answered in reply #1086.

Sorry, I was trying to be specific to the opacity adjustment approach and using pairwise summation via (a+b)/2.

I'll stop posting on this now  ;)

Cheers

Garry

artden

Hi! Is there a way to reduce the "saving to SD" time? I use ML for astrophotography. The intervalometer is configured as "like crazy", but I see ML spends about 2-4 seconds saving each image to SD on my 600D. So I lose about 15% of precious dark-sky time.


artden

I already did tests and selected MLV as the format for faster saving. But MLV doesn't give a large improvement in saving speed. Is there a way to save in the background, or something else?

Walter Schulz

Yes, I see. You are running Digic 4 with an SD card, and all the perks are for Digic 5: SD-card overclocking, lossless compression. I just did a small test with a 650D, overclocking and the MLV_Lite module: around 74 MLV frames/min. A 7D with a fast CF card does around 44 frames/min.

user330

I found bulb_end_addr and capture_err_time_addr for Canon 650D 104:

bulb_end_addr = 0x24954;
capture_err_time_addr = 0xFF1DFBF4;

I tried BULB mode with exposures longer than 30 sec (in silent picture) and it works.

Maybe it will be useful for someone.


elenhil

Can someone please help me troubleshoot my issue?

Something goes wrong with the LCD power off control when I try to combine FRSP with AETTR and/or Intervalometer. The LCD won't turn off after reviewing the pic. Naturally, I have Canon review on and LV Power Save timers on.

1) If a FRSP was taken with AETTR (with the Auto Snap trigger) and the exposure was OK (no correction from AETTR needed), the LCD turns off.
2) If a FRSP was taken with AETTR (with the Always On trigger) and the exposure was OK (no correction from AETTR needed), the LCD turns off.
3) If a FRSP was taken with AETTR (with the Always On trigger) and the exposure was wrong (AETTR had to make a correction for a *future* pic), the LCD turns off.
4) If a FRSP was taken with AETTR (with the Auto Snap trigger) and the exposure was wrong (AETTR had to make a correction *and* take the Auto Snapped extra pic), the LCD turns off after the first pic but doesn't turn off after the *extra* one (stays on indefinitely, save for actual FRSP capture, of course).
5) If a FRSP was taken with AETTR (with the Always On trigger) plus the Intervalometer, regardless of AETTR's reaction, the LCD stays on displaying the last reviewed pic (with the ETTR and Intervalometer readouts).

Obviously, turning off AETTR and/or Canon review makes the problem go away (but neuters ETTR). The same goes for choosing regular shuttered pics instead of FRSPs. It's just the combination of the three (clearly very desirable for timelapses etc) that fails.

Any ideas? Perhaps someone could review the code and find out what goes on in cases 4 and 5? I'm using ArcziPL's experimental build for 70D (Bilal also has his source code, in case you need it).

mgonidec

That sounds really promising. Could you indicate how you went about trying it? Would the interrupt be the same on different models? (Sorry, newbie here.)

It'd be nice to have a new build with this updated functionality!

Quote from: user330 on June 06, 2021, 05:53:14 AM
I found bulb_end_addr and capture_err_time_addr for Canon 650D 104:

bulb_end_addr = 0x24954;
capture_err_time_addr = 0xFF1DFBF4;

I tried BULB mode with exposures longer than 30 sec (in silent picture) and it works.

Maybe it will be useful for someone.

elakrab

Hi all

What is the status of the long-exposure / bulb FRSP module?

Can it be combined with ETTR? with intervalometer?
Do I need to make any bulb_end_addr/capture_err_time_addr changes and recompile for a Canon 6D, or is it included in some precompiled ML build?