CMOS/ADTG/Digic register investigation on ISO

Started by a1ex, January 10, 2014, 12:11:01 PM

naturalsound

Quote from: a1ex on March 05, 2014, 07:01:40 PM
Nope, answered here.

You do get an improvement, but it's small.

A low covariance is nothing special if there is a lot of Gaussian noise present. The fact that there is some covariance tempted me to try an FPN removal.
The improvement in terms of dynamic range may be small. Nevertheless, the improvement in visual quality can be significant.
But judge by yourself:
I took your sample ISO 200 pulled to 100 and split it into the top 80 and the lower 400 rows. Below you see the lower 400 without processing:

Then I did what g3gg0 just suggested: a vertical average (let's call it top) of the top 80 rows. Below you see the result of a (floating-point) subtraction of this average from each line of the image. After the subtraction I added Mean(top) to each pixel value so as not to shift the black level, and rounded the result to yield the new pixel values:
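
(For illustration, the same step in Octave - a quick sketch where "img" is the raw crop as a double matrix and the 80/400 split is the one described above:)

top       = img(1:80, :);                                  % rows used as a virtual OB
bottom    = img(81:end, :);                                % the 400 rows to be corrected
col_avg   = mean(top);                                     % vertical average of the top 80 rows (one value per column)
corrected = bottom - repmat(col_avg, size(bottom,1), 1);   % subtract the column profile from each line
corrected = round(corrected + mean(col_avg));              % add Mean(top) back to keep the black level, then round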

Now the same done in the vertical direction:


My subjective opinion is that removing the FPN can improve the visual quality significantly.

Please note that this test is just a quick & dirty one, so there may be some errors induced during re-export of the data to .png format. Also, the input data is not optimally quantized, as the histogram shows strong clipping at the borders:


(some typos corrected)

a1ex

Quote: top 80 and the lower 400 rows

=> you have assumed an OB as large as 658 pixels ;)

naturalsound

You're right; in my euphoria I missed the fact that your averaging provided me with a lower Gaussian noise component / a virtually increased OB.
I just took the values you used in your Reply #584.

I repeated my test with 10 pixels (which should be about 80 pixels of OB), resulting in a slight (but to my eye pleasing) vertical FPN reduction.

The effect on horizontal FPN is negligible.

Below is the reduction in the distribution of mean horizontal pixel values for different (virtual) OB sizes:

It is apparent that for OB=40 the FPN may even get worse.
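
(For illustration, a comparison of this kind could be scripted roughly as follows in Octave; "img" and the OB sizes are placeholders, not the exact script used:)

ob_sizes = [10 20 40 80];                        % virtual OB heights to compare
for k = 1:numel(ob_sizes)
    n     = ob_sizes(k);
    est   = mean(img(1:n, :));                   % column FPN estimate from the first n rows
    rest  = img(n+1:end, :);                     % the rest of the frame, to be corrected
    resid = mean(rest) - est;                    % residual column profile after subtraction
    printf('OB=%d: residual column stdev %.3f\n', n, std(resid));
end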

I hope that with this post my tests contribute at least a little for those who are not so familiar with FPN...

g3gg0


function ob_corr_plot(file)

    src_image=imread(file);

    %{
        average some top areas:
            1st: black clamp learning area, 4 through 28
            2nd: OB area that is just black, which is 42 to 79 on 5D3
            3rd: the top 500 lines of the image data (dark frame), starting at row 80
            4th: the whole dark frame
    %}
    vert_noise_1=avg_top(src_image, 4, 28);
    vert_noise_2=avg_top(src_image, 44, 38);
    vert_noise_3=avg_top(src_image, 80, 500);
    vert_noise_4=avg_top(src_image, 80, size(src_image)-80);

    %{
        now cut the left OB area and smooth over 256 pixels.
        should be gaussian, but for some tests it might be enough.
    %}
    smooth_1=smooth(vert_noise_1(122:1:end), 256);
    smooth_2=smooth(vert_noise_2(122:1:end), 256);
    smooth_3=smooth(vert_noise_3(122:1:end), 256);
    smooth_4=smooth(vert_noise_4(122:1:end), 256);
   
    % show correlation %
    corr(smooth_2, smooth_3)
   
    figure('name', 'Black clamp area');
        subplot(3,1,1);
        plot(smooth_1);
        title('Black clamp area profile');
        subplot(3,1,2);
        plot(smooth_2);
        title('Optical black area profile');
        subplot(3,1,3);
        plot(smooth_1, smooth_2);
        title('X/Y plot');
   
    figure('name', 'OB vs. Dark frame - top 500');
        subplot(3,1,1);
        plot(smooth_2);
        title('Optical black area profile');
        subplot(3,1,2);
        plot(smooth_3);
        title('Dark frame - top 500');
        subplot(3,1,3);
        plot(smooth_2, smooth_3);
        title('X/Y plot');
       
    figure('name', 'OB vs. Dark frame - full');
        subplot(3,1,1);
        plot(smooth_2);
        title('Optical black area profile');
        subplot(3,1,2);
        plot(smooth_4);
        title('Dark frame - full');
        subplot(3,1,3);
        plot(smooth_2, smooth_4);
        title('X/Y plot');
end

function ret = smooth(data, fact)
    for pos = 1:length(data)-fact    % length() instead of size() so it also works for column vectors
        ret(pos) = mean(data(pos : 1 : pos + fact));
    end
end

function ret = downsample(data, fact)
    for pos = 1:length(data)/fact    % length() so it works for column vectors
        ret(pos) = mean(data(((pos-1)*fact) + 1 : 1 : ((pos)*fact)));
    end
end

function ret = avg_top(src_image, start_idx, lines)
    part = src_image(start_idx:1:start_idx+lines, :);
    ret = sum(part) ./ size(part, 1);
    ret = transpose(ret);
end

% draw plots for a dark frame %
ob_corr_plot('D:\Users\g3gg0\Pictures\Kamera\2014_03_05\NO8A3083.pgm');

pause;


which produced this plot and a corr() of 0.95883 between the noise and some data from the top OB area.
Don't use the topmost 44 lines; use lines 44 to 79 and average them.

Does it work better?
http://upload.g3gg0.de/pub_files/16261ae822934c9175945a5e34cba116/plot.png
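
(In terms of the avg_top() and smooth() helpers above, that suggestion would read roughly like this inside ob_corr_plot() - the 122-pixel left crop and 256-sample smoothing are kept from the script:)

vert_noise_ob = avg_top(src_image, 44, 35);              % rows 44..79: start row 44 plus 35 more rows
smooth_ob     = smooth(vert_noise_ob(122:1:end), 256);   % same left crop and smoothing as above
corr(smooth_ob, smooth_3)                                % correlation against the dark-frame profile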

1%

Quote: My subjective opinion is that removing the FPN can improve the visual quality significantly.

For raw video in poorly lit conditions, it's the difference between usable shots and garbage...

Audionut

Quote from: a1ex on March 05, 2014, 05:19:47 PM
It does - the random component decreased 8 times. That's why the FPN became more destructive.

The ratio between FPN stdev and overall noise stdev should be a good indicator of how badly the FPN affects the image.

I was attempting to discuss the visual aspect of the FPN itself. So we can say: the Gaussian noise was reduced 8x and the FPN was reduced 2x, so the destructive nature of the FPN became more apparent, because its share of the total noise increased as a result.
For large changes in these percentages, this should be an accurate subjective measurement.

But we don't know exactly what happened to the FPN. Since stdev is basically an average, it doesn't (accurately) describe what happened at the edges of the FPN. Did the edge contrast of the FPN increase? See below.


Quote from: a1ex on March 05, 2014, 05:19:47 PM
If I understand the question well, the FPN analysis from raw_diag does exactly this.

Excellent. I was misunderstanding how to interpret its results earlier.

Quote from: a1ex on March 05, 2014, 05:19:47 PM
If you subtract the two FPN components, estimated by averaging, you end up with this.

Problem: you can't do that on a dark frame and then use that data to correct a useful image, because the FPN is not correlated between images (well, the autocorrelation is extremely weak, as you can see from the "xcov" analysis in raw_diag).

For post-processing, the correlation is important. In terms of "adjust this register - did the FPN decrease or increase?", it's not so important.

Quote from: naturalsound on March 05, 2014, 10:22:28 PM
I repeated my test with 10 pixels (which should be about 80 pixels of OB), resulting in a slight (but to my eye pleasing) vertical FPN reduction.
The effect on horizontal FPN is negligible.

It softens the edge detail of the FPN. This is important, because contrast is a determining factor in subjective quality.

http://en.wikipedia.org/wiki/Contrast-to-noise_ratio

It is the contrast of this FPN that makes it a noticeable noise component in images.

Let's consider the images at reduced size to emphasise the edge detail.



Because these images are made up of only 3 components, FPN, Gaussian, and black, the differences in the images might not be considered overly great.

However, with the fourth component present in the images, wanted detail, the difference would be subjectively greater. Where we reduce the edge detail/contrast of the noise, the edge detail/contrast of the wanted detail becomes more predominant.

The contrast ratio of the wanted detail becomes greater than the contrast ratio of the noise.

AdamTheBoy

I've been following this thread, and please forgive me if the answer is present somewhere, but I was curious whether this increase in dynamic range will be applicable to H.264 videos. If so, this development, combined with the recent bitrate hack being investigated here http://www.magiclantern.fm/forum/index.php?topic=4124.0, is really exciting for 5D3 owners who want to shoot H.264 for a project.

Audionut

I have had a quick play with it here, and I could see extra highlight detail when adjusting the registers.

SpcCb

Just a question about the computing capability inside the camera (the DIGIC, I presume):

Would it be too much to imagine calculating the vertical 2D FFT harmonics of 'some' (a dozen?) rows just above the active area but inside the OB, doing the same with the OB area on the left to find the horizontal 2D FFT harmonics, and then applying the inverse FFT of this to the whole active area?

a1ex, you spoke about the FPN reduction algorithm taking seconds; 2D FFT operations look faster to do. What do you think about doing this?
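
(For illustration, a 1-D sketch of that idea in Octave - the row ranges and the number of harmonics kept are arbitrary placeholders, just to show the principle:)

ob_rows   = img(44:55, :);                       % ~a dozen rows from the top OB
profile   = mean(ob_rows) - mean(ob_rows(:));    % mean-free column profile
spec      = fft(profile);
keep      = 32;                                  % number of harmonics to keep
spec(keep+2 : end-keep) = 0;                     % drop everything above the kept harmonics
fpn_est   = real(ifft(spec));                    % smoothed vertical FPN estimate
active    = img(80:end, :);
corrected = active - repmat(fpn_est, size(active,1), 1);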

a1ex

Just take a look at how fast raw_diag is when processing the entire image (try the dark frame analyses).

a1ex

Quote: It is apparent that for OB=40 the FPN may even get worse.

Even if you take a single line from the top OB, as long as there is some weak correlation between that line and the FPN, you can still use it to reduce the FPN. The key is not to subtract it blindly, but to give it a lower weight.

How much?

Let's look at the Kalman filter theory: http://robocup.mi.fu-berlin.de/buch/kalman.pdf

From page 3, if we know how noisy our estimations are, the optimal weights are inversely proportional to the noise variances. For averaging two random variables x1 and x2, the optimal weights simplify to var(x2) and var(x1):


x_optimal = (x1 * var(x2) + x2 * var(x1)) / (var(x1) + var(x2))


where var(x) = std(x)^2.

So, if we use a single line from the top OB, we need to know how noisy that line is and how bad our FPN is. Assuming Gaussian noise with variance vg = sg^2 plus a Gaussian FPN with variance vf = sf^2, our OB line estimates the FPN with an error variance equal to vg. If we simply assume the FPN is zero, the error is Gaussian with variance vf. Combining these two estimates optimally means simply downweighting the top OB line:


fpn_estimated_from_one_line = vf / (vf + vg) * that line + vg / (vf + vg) * 0


If we average n lines to estimate the FPN better, its error will have a variance equal to vg/n; that is, the stdev of our estimation will be sg/sqrt(n). To get minimal variance in the corrected FPN, the optimal weighting is now:


weight_avg = vf / (vf + vg/n)
weight_fpn = (vg/n) / (vf + vg/n)
fpn_estimated_from_n_lines = weight_avg * mean(n_lines) + weight_fpn * 0


What will be the variance of our corrected FPN?


var_combined = var_avg * weight_avg^2 + var_fpn * weight_fpn^2


where var_avg = vg/n and var_fpn = vf.


=> var_combined = vg/n * vf^2 / (vf + vg/n)^2 + vf * (vg/n)^2 / (vf + vg/n)^2
=> var_combined = (vg/n * vf^2 + vf * (vg/n)^2) / (vf + vg/n)^2
=> std_combined = sqrt(var_combined)


Now let's check the theory.

Let's say we have a 720x480 dark frame that contains a random noise of stdev sg=0.6 and a vertical FPN of stdev sf=0.25 (values typical for a downsampled ISO 100 pulled from 200).


sg = 0.6;                                                       # Noise stdev
sf = 0.25;                                                      # FPN stdev
vg = sg^2; vf = sf^2;                                           # Variances
G = randn(480,720) * sg;                                        # generate Gaussian noise
f = randn(1,720) * sf; F = ones(480,1) * f;                     # generate Gaussian FPN
GF = G + F;                                                     # combine both noises => ideal dark frame
imshow(GF, [])
print -dpng -r60 ideal-dark.png





# try to average n lines
for n = 1:80
    fe = mean(GF(1:n,:)) * vf / (vf + vg/n);                    # FPN estimated and weighted optimally
    S(n) = std(f-fe);                                           # experimental stdev of corrected FPN
    SI(n) = sqrt((vg/n * vf^2 + vf * (vg/n)^2) / (vf+vg/n)^2);  # theoretical stdev of corrected FPN
    SR(n) = std(f - mean(GF(1:n,:) * (rand*2)));                # try some other random weights between 0 and 2
end

plot(SI/sf, 'r-o', 'markersize', 2), hold on
plot(S/sf, 'b-o', 'markersize', 2)
plot(SR/sf, 'kx', 'markersize', 2)
axis([0 80 0 1]), grid on
set(gca, 'ytick', 0:0.25:1);
set(gca, 'position', [0.2 0.15 0.7 0.7])
legend('Theoretical improvement', 'Simulated improvement', 'Random weighting', 'location', 'southwest')
title('FPN improvement in ideal conditions')
xlabel('Number of lines averaged')
ylabel('stdev(corrected FPN) / stdev(original FPN)')
print -dpng -r70 ideal-fpn-improvement.png




So, this graph shows:

1) this weighting is indeed optimal (all the random weights resulted in worse improvement)
2) we have an upper bound for the improvement that can be obtained by averaging some lines

=> to reduce the FPN component by 1 stop, we need to average 17 lines out of 480 for this numerical example.
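
(Plugging n = 17 into the std_combined formula above, with the same sg = 0.6 and sf = 0.25:)

n = 17;
sqrt((vg/n * vf^2 + vf * (vg/n)^2) / (vf + vg/n)^2) / sf
# ans is about 0.50, i.e. the remaining FPN is roughly half of the original -> ~1 stop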

How much is 1 stop of FPN improvement? (note the Gaussian noise is not touched)


n = 17;                                                      # number of lines to average
fe = mean(GF(1:n,:)) * vf / (vf + vg/n);                     # FPN estimated from 17 lines and weighted optimally
std(f-fe) / std(f)                                           # experimental improvement for FPN
ans =  0.54244
FE = ones(480,1) * fe;                                       # extend the correction to the entire image
clf, imshow(GF-FE, [])                                       # subtract the estimated FPN
print -dpng -r60 ideal-corrected-1ev.png




To be continued; feel free to try to apply this theory to a practical example.

g3gg0

@a1ex:

Today I sat in front of MATLAB with a colleague and we tried to get some more information from the top OB area.
He had the idea to do a singular value decomposition for pattern recognition. We implemented it based on the example image generator and built a script for it.
The result was visually pleasing - could you check if it improves things? Thanks :)


sg = 0.6;
sf = 0.25;
vg = sg^2; vf = sf^2;
G = randn(480,720) * sg;
f = randn(1,720) * sf; F = ones(480,1) * f;
GF = G + F;

% the error seems to have its main energy in the 1st order %
max_order = 1;
num_plots = 1+max_order;

% plot original image %
figure;
subplot(num_plots,1,1);
imagesc(GF);

% only use top 40 pixels for correction %
top = GF(1:40,:);
top_mean_free = top-ones(size(top))*mean(mean(top));

% run a singular value decomposition (principal component analysis) %
[A,B,C] = svd(top_mean_free');

% start correction %
GF2=GF';
for order=1:max_order
   % get deviation pattern of n-th order %
   dev = A(:,order);
   % scale pattern with stddev of this pattern %
   corr_vec = sqrt(B(order,order)) * dev;
   % remove mean to make it mean-free %
   corr_vec_mean_free = corr_vec - ones(size(corr_vec)) * mean(corr_vec);
   
   % to detect the pattern polarity we do a projection of our pattern onto our top OB area and check its mean value %
   dir = sign(mean(corr_vec_mean_free' * top_mean_free'));
   
   % now we can add/subtract that pattern from image data %
   GF2 = bsxfun(@minus,GF2,corr_vec_mean_free * dir);
   
   % and show the result %
   subplot(num_plots,1,1+order);
   imagesc(GF2');
end

a1ex

It simply finds the same optimal solution with a different method.

Result from your program:

std(mean(GF2')) / sf
ans =
    0.3525


My theoretical result for n=40:

n=40;
sqrt((vg/n * vf^2 + vf * (vg/n)^2) / (vf+vg/n)^2) / sf
ans =
    0.3548


In this ideal example, your principal component analysis summarized the top 40 lines into a single one (which is to be expected, because the signal we are looking for in these lines - the FPN - is the same in all these lines, and anything else besides this FPN is just Gaussian noise). What would be interesting is to try this one on a real OB. There I expect surprises (probably pleasant ones).
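
(A quick way to see this on the same ideal example - assuming GF, GF2, A, f, sf, vf and vg from the scripts above are still in the workspace:)

n = 40;
fe_mean = mean(GF(1:n,:)) * vf / (vf + vg/n);   # optimally weighted column mean of the top 40 lines
std(f - fe_mean) / sf                           # residual FPN after the weighted-mean correction
std(mean(GF2')) / sf                            # residual FPN after the SVD correction (as above)
abs(corr(A(:,1), mean(GF(1:n,:))'))             # correlation between the 1st singular vector and the plain column mean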

Marsu42

Quote from: a1ex on February 03, 2014, 10:45:10 PM
Updated adtg_gui: I've added a routine that dumps all the registers into a log file after taking a picture (this is what I've used to make the funky graphs). I'd like to get these graphs for all cameras, so I need your help

Ok, here you go for 6d: https://bitbucket.org/Marsu42/ml-aiso/downloads/adtd-logs-6d.zip

On another note: the above research is absolutely impressive, and no doubt it will result in optimal usage of the Canon sensor, even though the complicated reverse engineering may stall the work from time to time. But, speaking with my simple user's hat on...

... I would be most grateful if we'd get a working, if not optimal or final, mini_iso.mo for the latest core, with working white point adjustment for the 6D. This is because every day without it, I'm losing some DR in my shots, and that is sad to know :-o

a1ex

Well, the next step towards mini_iso is a library for patching things around in Canon code. Around 70% of it is mini_iso code refactored to be generic, and the rest is a small GUI frontend to see what exactly got patched.

That's because mini_iso needs to patch a LOT of memory addresses, and I want to do it cleanly, with sanity checks before patching, integrity checks at runtime and - very important to me - undoing all patches when you disable them from the menu. This doesn't happen with iso_regs.

This could be reused for a cleaner implementation of things like the bitrate hacks from Tragic Lantern, since this library would help with all the dirty stuff like how to undo the patches and when to enable the cache hacks, and it will also let the user know what hacks are enabled at any given time. When shooting something a little more serious, I want to know for sure what patches are applied, and that stuff turned off from the menu is indeed turned off.

In my local copy I already refactored dual ISO and raw_rec/mlv_rec patches to be applied via this library.

Marsu42

Quote from: a1ex on March 16, 2014, 02:07:31 PMIn my local copy I already refactored dual ISO and raw_rec/mlv_rec patches to be applied via this library.

Sounds great and I'm a big fan of safe and clean code (because even as a module coder, I sometimes have to look at it :-)).

I just wanted to add my 2ct: don't let perfectionism stand in the way of pragmatism for a working mini_iso. Just like Linus Torvalds always says about Linux, I feel ML is not a research project, but something made to be actually used :-o

a1ex

And exactly for this reason I want mini_iso to just work.

hjfilmspeed

I'm not a coder by any means, but I liken the finely tuned modules to a song an artist is writing. He or she won't release it until all of it is perfect and flows just right. These modules/code are the coders' babies; with all the work they put into them, I'm sure they want them to be flawless before they go public or beta etc. But it's hard for us non-coders to be patient with such incredible features. Can't wait for this mod!!!

Marsu42

Btw, when this code goes public, it would be very nice to have an external function to get the optimal overexposure EC from the mini_iso calibration. This way, auto-exposure modules like autoexpo or auto_iso that work in M mode can automatically modify the exposure accordingly.


int get_mini_iso_calibration(int iso_canon8)
{
    int ec_canon8;
    ...
    return ec_canon8; // ec value in canon steps (0/3/4/5/8) 
}

Greg

ADTG[7] 0x4046 -> 0x4036

It darkens the image by 0.5 EV on the 500D.
White level and black level remain unchanged.

Other ADTG registers change the white level or black level.

kgv5

I haven't been here for a while; what is the current status of the video mode on the 5D3? Is it still 0.1 EV (much less than in stills mode), like in the description on the first page, or has there been some improvement in this matter?

Audionut

I'm pretty sure I was seeing some good changes with the latest iso_regs module.  But there didn't appear to be an easy way to see the OB area of raw video, and I don't shoot video, so I gave up very easily.  ;)

If someone points out how to see the OB area of raw video, I'll take another look.

Audionut

I was pondering something today.

Canon ISO 100/1600

Input file      : _46a0505.cr2
Camera          : Canon EOS 5D Mark III
White levels    : 15179 13789
Noise levels    : 11.05 6.48 6.54 10.98 (14-bit)
ISO difference  : 3.95 EV (1550)
Dynamic range   : 10.99 (+) 10.05 => 14.01 EV (in theory)
Semi-overexposed: 0.89%
Deep shadows    : 96.88%
ISO overlap     : 4.0 EV (approx)
Noise level     : 37.21 (20-bit), ideally 37.22
Dynamic range   : 14.46 EV (cooked)



ML ISO 100/800

Input file      : _46a0504.cr2
Camera          : Canon EOS 5D Mark III
White levels    : 15645 14250
Noise levels    : 6.49 3.95 3.96 6.40 (14-bit)
ISO difference  : 3.00 EV (799)
Dynamic range   : 11.75 (+) 10.88 => 13.87 EV (in theory)
Semi-overexposed: 0.82%
Deep shadows    : 95.91%
ISO overlap     : 5.8 EV (approx)
Noise level     : 42.41 (20-bit), ideally 42.41
Dynamic range   : 14.32 EV (cooked)


ML ISO 100 is actually Canon ISO 200 with the highlight saturation point adjusted. In the midtones/shadows it behaves exactly like Canon ISO 200, but the rated ISO gets shifted down to ISO 100, as per DxO's definition. This is how the DR increases with ML ISO: the midtones/shadows stay the same, but the highlight capturing ability increases.

ML ISO 800 is actually Canon ISO 1600 with the highlight saturation point adjusted. So, the shadow point of ML ISO 800 is actually equal to that of Canon ISO 1600.

The DR of ML ISO 100 is 0.8 EV greater than that of Canon ISO 100.

So, we gain 0.8 EV of midtone overlap from the cleaner ISO, and we gain another 1 EV of midtone overlap because we are only 3 stops apart (Canon ISO 200/1600) instead of 4 stops apart (Canon ISO 100/1600).
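
(Putting numbers on that from the two logs above - the 0.8 + 1.0 split is just the reasoning described here, not something cr2hdr reports:)

overlap gain = 5.8 EV - 4.0 EV = 1.8 EV
             ~ 0.8 EV (cleaner ML ISO 100 shadows vs. Canon ISO 100)
             + 1.0 EV (ISO pair 3 stops apart instead of 4)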


a1ex

Quote from: a1ex on March 16, 2014, 02:07:31 PM
Well, the next step towards mini_iso is a library for patching things around in Canon code.

The first step was made: https://bitbucket.org/hudson/magic-lantern/pull-request/476/experimental-library-for-managing-memory

As a second prerequisite, I would also like a tool for pre-processing raw files - in this context, I need it to reduce the FPN. Without it, the noise improvement will have little practical value if the FPN stays at the old levels. And since I wasn't able to find a way to fix the FPN in camera, the quickest solution would be a software tool similar to cr2hdr (which would most likely output a DNG file).

Advantage: the FPN fix is also needed for some regular ISOs, for example here, so the same tool could do the job in both cases.

Marsu42

Quote from: a1ex on April 22, 2014, 06:17:30 PM
The first step was made: https://bitbucket.org/hudson/magic-lantern/pull-request/476/experimental-library-for-managing-memory

Great there's progress on this!

Quote from: a1ex on April 22, 2014, 06:17:30 PM
As a second pre-requisite, I would also like a tool for pre-processing raw files - in this context, I need it to reduce the FPN. Without it, the noise improvement will have little practical value if the FPN will stay at the old levels. And since I wasn't able to find a way to fix the FPN in camera, the quickest solution would be a software tool similar to cr2hdr (which will most likely output a DNG file).

Doh, I believe you that this cannot be done in camera, but another cr2hdr-like step adds a lot of overhead - it would be important that this is at least a lossless conversion, so the original CR2 can be deleted instead of doubling the storage space requirements like dual_iso. I'd also like to note that ending up with DNG instead of CR2 makes using DxO's PRIME noise reduction impossible; that's why I recently switched from DNG back to preserving the CR2. If it's possible, an option to patch the original CR2 would be welcome.