Light Trails mode

Started by pursehouse, January 23, 2015, 07:47:04 PM


pursehouse

haha @g3gg0

ok so... when a photo is taken, the shutter is opened ( or already open ).
The CMOS sensor is exposed to light.
The sensor software has it do a progressive scan across each pixel repeatedly, and continually add to the color value of each pixel.
So let's say during a 1s shutter time, it scans 1000 times ( depending on the sensor/resolution ).
Each of those scans add to each of the respective pixels over time to give the final resulting frame.

As far as I can tell, that scan-and-add code is what drives the CMOS sensor. Is that code not accessible via the ML firmware? It seems the dual pixel AF firmware upgrade from Canon a few years ago could have affected the CMOS' functionality.

Getting a final frame out of the CMOS is of course very speed-limited compared to how fast the CMOS internally updates the pixels of the future final frame.

I believe some new CMOS chips are going to scan in groups of 4, but for now I believe everything on Canon reads out pixel by pixel.

So I take it you are saying that there is no access to the CMOS scan processing code? That is a real shame :/

Well even if that's the case, it would still be huge to get the in-buffer merging done, it'd just be limited to whatever shots-per-second the sensor can handle outputting at the chosen resolution.
Just off the top of my head, the max()/avg()/min() ability in camera is huge for:
lightning photos ( leave in bulb mode in using max() until after lightning strikes )
waterfalls ( in avg or max mode, as dmilligan showed, but in full resolution raw file instead of low res flattened video )
light painting ( in max mode )
moving object omit/blur ( avg mode )
although I don't really do them, star trails would for sure be easier like this too ( like dmilligan brought up ), yes
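To make the max()/avg()/min() idea concrete, here is a minimal sketch in C of per-pixel accumulation over successive raw frames. This is illustrative only; the enum, function, and buffer names are made up, not actual ML code:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical blend modes -- names are illustrative, not the ML API. */
enum blend_mode { BLEND_MAX, BLEND_MIN, BLEND_AVG };

/* Fold one new frame into the accumulator, pixel by pixel.
 * For AVG, keep a running sum in the wider buffer and divide by the
 * frame count once the sequence is finished. */
static void blend_frame(uint32_t *acc, const uint16_t *frame,
                        size_t n, enum blend_mode mode)
{
    for (size_t i = 0; i < n; i++)
    {
        switch (mode)
        {
            case BLEND_MAX:
                if (frame[i] > acc[i]) acc[i] = frame[i];
                break;
            case BLEND_MIN:
                if (frame[i] < acc[i]) acc[i] = frame[i];
                break;
            case BLEND_AVG:
                acc[i] += frame[i];  /* running sum; divide at the end */
                break;
        }
    }
}
```

Lightning and light painting would use BLEND_MAX (only the brightest value a pixel ever saw survives); waterfalls and moving-object blur would use BLEND_AVG.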


as a traveling photographer, I currently have to do things like this with multiple dark ND filters and a large amount of time/testing/failure. Laptops in the wild are rarely an option, and storage space is fairly limited, so you end up dumping your 128gb card after every couple of tests, then going back after you compile and seeing things were totally wrong in post.

And photos like the one I shared of the trains are flat out impossible without in-camera processing ( assuming you want a high resolution raw ), due to card write speed and storage limitations for long exposures.

Thoughts?

dmilligan

Quote from: pursehouse on January 29, 2015, 01:10:04 AM
The CMOS sensor is exposed to light.
The sensor software has it do a progressive scan across each pixel repeatedly, and continually add to the color value of each pixel.
That is not at all how CMOS sensors work. http://en.wikipedia.org/wiki/Active_pixel_sensor

Pixels (aka sensels) are basically just "photon buckets" (hence g3gg0's analogy). Photons land in them and are converted to electrical charge. The electrical charge accumulates during the exposure; at the end of the exposure, the accumulated charge passes through one or more amplifiers to an ADC (analog-to-digital converter), which converts the analog voltage created by the charge to a digital number. We read out all the numbers for all the pixels and we get an image.

You can't control how photons accumulate on the sensels. They just accumulate, and this happens in the analog domain. You can only start doing custom processing once you read out the analog value of the sensel and convert it to a digital number. And doing that is actually capturing an image.
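A toy model of the above, with invented numbers (the full-well capacity is made up; the 14-bit ADC range matches Canon raw): charge accumulates continuously in the analog domain during the exposure, and a digital value only exists after the single readout at the end:

```c
#include <stdint.h>

/* Toy model (illustration only): photons -> charge -> one ADC read at the end.
 * There is no per-pixel "scan and add"; accumulation is analog and continuous. */

#define FULL_WELL  60000.0  /* electrons the sensel can hold (made-up number) */
#define ADC_MAX    16383    /* 14-bit ADC, as in Canon raw files */

/* Charge grows linearly with photon count until the well saturates.
 * Quantum efficiency is assumed to be 1 for simplicity. */
static double expose(double photons_per_sec, double seconds)
{
    double charge = photons_per_sec * seconds;
    return charge > FULL_WELL ? FULL_WELL : charge;
}

/* A single readout: amplify + quantize the analog charge to a digital number. */
static int adc_read(double charge)
{
    return (int)(charge / FULL_WELL * ADC_MAX);
}
```

Note there is nothing you can hook into between `expose` and `adc_read`: custom processing can only start on the digital numbers, i.e. on complete captured frames.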

This is how all CMOS sensors work, even Sony's. The 'light trails' mode on your Sony is just capturing normal images and blending them on the fly. It is not some magical new sensor mode.


pursehouse

oooohkay... I had a very different understanding of how it worked. Guess I'm reading the wrong stuff. Lots to learn!

So, then the EekoAddRawPath is the way to go, right? Each outgoing frame getting compiled into the resulting goal frame?

g3gg0

as alex pointed out at some point (iirc), bulb exposures longer than 15 sec are also done internally as image addition of 15-sec exposures.

pursehouse

that's interesting... is that in the default Canon firmware, or is that something ML made happen?

the 15-second exposure thing isn't really the same as high-rate frames, though. With 15-second sets you are still basically averaging all the pixels ( and 15 seconds is a lot of motion ), instead of taking a max over something like, say, 1/60s sets. So you'd end up with a faint blur of the moving items instead of a prominent streak.
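Toy numbers illustrating that point (values invented): if a bright light at raw value 4000 crosses a pixel during 1 of 60 sub-frames, and the background reads 100 the rest of the time, averaging nearly erases the streak while max keeps it at full brightness:

```c
/* Invented values: a moving light hits this pixel in 1 of 60 sub-frames. */
#define FRAMES 60
#define BRIGHT 4000   /* raw value while the light is on the pixel */
#define DARK   100    /* background raw value otherwise            */

/* avg(): the streak is diluted by all the frames where it was absent. */
static int trail_avg(void)
{
    long sum = 0;
    for (int i = 0; i < FRAMES; i++)
        sum += (i == 0) ? BRIGHT : DARK;
    return (int)(sum / FRAMES);
}

/* max(): the brightest value the pixel ever saw survives untouched. */
static int trail_max(void)
{
    int m = 0;
    for (int i = 0; i < FRAMES; i++) {
        int v = (i == 0) ? BRIGHT : DARK;
        if (v > m) m = v;
    }
    return m;
}
```

With these numbers, avg gives 165, barely above the 100 background, while max keeps the full 4000 streak.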

It would be even more interesting if you could have 2(or more?) goal frames being compiled during the process... with the a-b,a+b,min,max options listed on the other page... then a person could toy with those different compilations of light over time in post. Could be rather interesting for artsy style photos...

Oh and average(a,b) as an option too... having a combination of those photos to work with in post could open up entire new worlds of options in editing. That assumes it is possible to produce 2 outbound images at the same time, which I have no idea about :)
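Keeping several "goal frames" at once would just mean running several accumulators over the same stream of frames, something like this sketch (struct and field names are invented for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: maintain max, min, and average goal frames in parallel
 * from a single stream of incoming frames. */
struct accumulators {
    uint16_t *max_buf;   /* brightest value each pixel ever saw */
    uint16_t *min_buf;   /* darkest value each pixel ever saw   */
    uint32_t *sum_buf;   /* running sum, for the average        */
    size_t    count;     /* frames accumulated so far           */
};

static void accumulate(struct accumulators *a, const uint16_t *frame, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (frame[i] > a->max_buf[i]) a->max_buf[i] = frame[i];
        if (frame[i] < a->min_buf[i]) a->min_buf[i] = frame[i];
        a->sum_buf[i] += frame[i];
    }
    a->count++;
}

/* The average goal frame is derived from the running sum when finishing. */
static uint16_t avg_pixel(const struct accumulators *a, size_t i)
{
    return (uint16_t)(a->sum_buf[i] / a->count);
}
```

Each extra goal frame only costs one more buffer and a few operations per pixel per frame, so the real constraint would be free memory for the buffers, not processing.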

pursehouse

update! haha

ok so, the Sony feature is doing frames at a minimum length of 1 second ( max 15 sec ) at a time.
Which, I guess, makes the processing overhead of the frames a lot easier on the hardware...

So what'ya say guys? A1ex and G3gg0's Eeko/TwoAdd setup to do this feature? Can it happen? Can I help? I code, I just haven't coded ML before. It looks like the demo code that A1ex set up is almost this feature, unless I'm misunderstanding?