(...)
Any misunderstanding on my part is a direct result of your short post, with counterclaims based on a misunderstanding of the reported results, and no linked documentation to support your claims. He said, she said.
(...)
I totally understand, and I totally agree that I cannot pretend to be an authority in the domain. Plus, I admit making errors sometimes (human factor, and I like it BTW).
So, let's see what we are talking about...
We are speaking about
CMOS and
DR, and
DR with dual_iso.mo in this particular case.
Some time ago at my job I discussed closely related topics with people from Stanford University; I think it could help here:
A CMOS sensor can be schematized as a 2D matrix device where pixel voltage is read out one row at a time to column storage capacitors,
then read out using the column decoder and multiplexer. Row integration times are staggered by the row/column readout time.
I'm sure a lot of people here already know this; it's just to introduce the context, to be sure we're speaking about the same thing.
Besides, this will clearly be a simplified version; if you need more precision, please see the references at the end.
We would like to get the maximum DR from this device, so the first thing to do is to analyse what is reducing the DR.
In our activity (daylight photography, I mean with short exposure times, under regular thermal conditions: nobody here uses their camera at -40°C, except maybe 1 or 2 persons, me included) we can find: TN (Temporal Noise), FPN (Fixed Pattern Noise), dark signal (also called dark current, etc.; for physics reasons I prefer to speak of signals, currents are for electronics), and spatial sampling plus low-pass filtering.
TN is caused by photodetector and MOS transistor thermal, shot, and 1/f noise. It can be lumped into three additive components: integration noise (due to photodetector shot noise), RN (Reset Noise), and RON (Read Out Noise). Noise increases with signal, but so does the SNR (Signal to Noise Ratio); under dark conditions it presents a fundamental limit on sensor DR (Dynamic Range).
FPN is the spatial variation in pixel outputs under uniform illumination, due to device and interconnect mismatches over the sensor, and has two components: offset and gain. FPN is most visible at low illumination because offset FPN is more important than gain FPN. This is why a1ex and I are trying to introduce a new pedestal definition to optimize this point.
Dark signal comes from the leakage current at the integration node (i.e. current not induced by photogeneration) due to junction and transistor leakages. It limits the image sensor DR by introducing dark integration noise (due to shot noise), varying widely across the image sensor array, causing FPN that cannot be easily removed (a1ex is working on this point right now; it's a hard task), and reducing signal swing.
So we can synthesize all of this in a photocurrent-to-output-charge model:

Q_O = Q(i) ⊕ Q_Shot ⊕ Q_Reset ⊕ Q_Readout ⊕ Q_FPN

Where:
Q(i) is the sensor transfer function and is given by:
Q(i) = (1/q)·i·t_int electrons, for 0 < i < q·Q_sat/t_int, and Q_sat for i >= q·Q_sat/t_int,
where Q_sat is the well capacity.
Q_Shot is the noise charge due to integration (shot noise) and has average power (1/q)·(i_ph + i_dc)·t_int.
However, to calculate SNR and dynamic range we use the model with equivalent input referred noise current.
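Before moving to the input-referred model, the transfer function Q(i) can be sketched directly in code; the function name and all numbers below are mine, chosen purely for illustration:

```python
Q_E = 1.602e-19  # elementary charge q, in coulombs

def collected_charge(i, t_int, q_sat):
    """Output charge in electrons for input current i (A), integration
    time t_int (s) and well capacity q_sat (electrons): linear in i,
    then clipped at Q_sat once the well saturates."""
    electrons = (1.0 / Q_E) * i * t_int  # Q(i) = (1/q) * i * t_int
    return min(electrons, q_sat)

# Illustrative 40,000 e- well, 5 ms integration:
print(collected_charge(1e-15, 5e-3, 40000))  # linear region (~31 e-)
print(collected_charge(1e-11, 5e-3, 40000))  # saturated at 40000
```

Any input current above the saturation point produces the same output charge, which is exactly why the well capacity caps the usable signal swing.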
Since it is linear, we can readily find the average power of the equivalent input referred noise I_n (i.e. average input referred noise power) to be:

σ_In² = (q²/t_int²)·[(1/q)·(i_ph + i_dc)·t_int + σ_r²] A², where σ_r² = σ_reset² + σ_RON² + σ_FPN²
SNR is the ratio of the input signal power to the average input referred noise power, so using the average input referred noise power expression we get:
SNR(i_ph) = 10·log10( i_ph² / [ (q²/t_int²)·((1/q)·(i_ph + i_dc)·t_int + σ_r²) ] )
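As a quick numeric illustration, this SNR expression can be evaluated directly; all parameter values here are invented for the example, not taken from any real camera:

```python
import math

Q = 1.602e-19  # elementary charge q, in coulombs

def snr_db(i_ph, i_dc, t_int, sigma_r2):
    """SNR in dB from the expression above: photocurrent i_ph (A),
    dark current i_dc (A), integration time t_int (s), and combined
    reset + readout + FPN noise power sigma_r2 (in electrons^2)."""
    noise_power = (Q**2 / t_int**2) * ((1.0 / Q) * (i_ph + i_dc) * t_int + sigma_r2)
    return 10.0 * math.log10(i_ph**2 / noise_power)

# SNR grows with the photocurrent, as stated above:
print(snr_db(1e-15, 1e-16, 5e-3, 400.0))  # low light: a few dB
print(snr_db(1e-13, 1e-16, 5e-3, 400.0))  # brighter: tens of dB
```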
DR is defined as the ratio of the largest nonsaturating input signal to the smallest detectable input signal.
The analog output of the camera is subsequently quantized via A/D conversion to obtain a digital image. The number of gray levels in the image and the gain of the A/D converter are usually adjusted such that the maximum gray level i_max corresponds to the FWC (Full Well Capacity) and the minimum level i_min corresponds to the minimum signal (RON) detectable by the sensor. The process of quantization itself introduces additional noise, but we will ignore its contribution for simplicity (sorry, too long to explain here).
The largest nonsaturating signal is given by:

i_max = q·Q_sat/t_int − i_dc

The smallest detectable input signal is defined as the standard deviation of the input referred noise under dark conditions, σ_In(0) (the zero here refers to i_ph = 0), which gives:

i_min = (q/t_int)·√[(1/q)·i_dc·t_int + σ_r²]
So the DR of the digitized image can be written as:

DR = 20·log10(i_max / i_min)

Hence,

DR = 20·log10[ (q·Q_sat/t_int − i_dc) / ( (q/t_int)·√[(1/q)·i_dc·t_int + σ_r²] ) ]
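Plugging illustrative numbers into this formula (a 40,000 e- well and 20 e- rms combined read noise; none of these values come from a real camera) gives a figure in the usual ballpark for CMOS sensors:

```python
import math

Q = 1.602e-19  # elementary charge q, in coulombs

def dynamic_range_db(q_sat, i_dc, t_int, sigma_r2):
    """DR in dB from the formula above: well capacity q_sat (electrons),
    dark current i_dc (A), integration time t_int (s), combined noise
    power sigma_r2 (electrons^2)."""
    i_max = Q * q_sat / t_int - i_dc
    i_min = (Q / t_int) * math.sqrt((1.0 / Q) * i_dc * t_int + sigma_r2)
    return 20.0 * math.log10(i_max / i_min)

# 40,000 e- well, 20 e- rms read noise (sigma_r2 = 400), 5 ms exposure:
print(dynamic_range_db(40000, 1e-16, 5e-3, 400.0))  # ~66 dB, about 11 stops
```

One stop is about 6.02 dB, so a 66 dB result corresponds to roughly 11 stops of single-exposure DR.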
DR with dual_iso.mo means we have to look at the DR of a kind of SVE (Spatially Varying Exposure) image.
Hence, the DR of an SVE camera is:

DR_SVE = 20·log10[ (i_max / i_min) · (e_max / e_min) ]

I won't expand all the variables in this equation, for readability; it would be too complicated here on the forum.
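Note that, compared to the single-exposure DR, the SVE formula just adds a 20·log10(e_max/e_min) term. A tiny sketch of that extra term (the ISO pair below is only an example, not necessarily what dual_iso.mo uses):

```python
import math

def sve_dr_gain_db(e_max, e_min):
    """Extra DR (dB) from spatially varying exposure: 20*log10(e_max/e_min)."""
    return 20.0 * math.log10(e_max / e_min)

# Hypothetical dual_iso pair 16x apart in exposure (e.g. ISO 100/1600 rows):
print(round(sve_dr_gain_db(16.0, 1.0), 1))  # 24.1 dB, about 4 stops of extra DR
```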
As we can see when we take a picture with dual_iso.mo, each exposure is uniformly quantized but the set of exposures together produce a non-uniform quantization of scene radiance (i.e. we see horizontal lines on the camera screen). As noted by Madden (B. Madden, Extended Intensity Range Imaging. Technical Report MS-CIS-93-96, Grasp Laboratory, University of Pennsylvania, 1993), this non-uniformity can be advantageous as it represents a judicious allocation of resources (bits). Though the difference between quantization levels increases with scene radiance, the sensitivity to contrast remains more or less linear. This is because contrast is defined as brightness change normalized by brightness itself.
We now determine the total number of gray levels captured by an SVE imaging system. Let the total number of quantization levels produced at each pixel be q and the number of different exposures in the pattern be K. Then, the total number of unique quantization levels can be determined to be:

Q = q + ∑_{k=1}^{K−1} R[ (q−1) − (q−1)·(e_k / e_{k−1}) ]

where R(x) rounds off to the closest integer and the exposures e_k are sorted in decreasing order.
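A small sketch of this counting formula, under my reading of the sum (exposures sorted in decreasing order); the 12-bit value is just an example:

```python
def unique_levels(q, exposures):
    """Total unique quantization levels: q levels from the first exposure,
    plus the non-overlapping levels contributed by each further exposure.
    Assumes the exposures list is sorted in decreasing order."""
    total = q
    for k in range(1, len(exposures)):
        # R[(q-1) - (q-1) * e_k / e_(k-1)], rounded to the closest integer
        total += round((q - 1) - (q - 1) * exposures[k] / exposures[k - 1])
    return total

# Two exposures 16x apart with 12-bit quantization per pixel (q = 4096):
print(unique_levels(4096, [16.0, 1.0]))  # 4096 + round(4095 * 15/16) = 7935
```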
This last equation is useful to compute ADU values from dual_iso images. Maybe a similar algorithm is used by ML and cr2HDR to compute the DR gain from dual_iso; to be honest, I haven't taken a look at the code yet.
Ref.: You can find the full development of this in scientific papers, plus some other references included:
_. High Dynamic Range Image Sensors, A. El Gamal, Department of Electrical Engineering, Stanford University
_. High Dynamic Range Imaging: Spatially Varying Pixel Exposures, S. K. Nayar, Department of Computer Science, Columbia University & T. Mitsunaga, Media Processing Laboratories, Sony Corporation
_. E. Ikeda, Image data processing apparatus for processing combined image signals in order to extend dynamic range, September 1998
_. A. Morimura, Imaging method for a wide dynamic range and an imaging device for a wide dynamic range, October 1993
_. R. A. Street, High dynamic range segmented pixel sensor array, August 1998
_. Y. T. Tsai, Method and apparatus for extending the dynamic range of an electronic imaging system, May 1994