Dealing with Focus Pixels in raw video

Started by dfort, October 22, 2015, 11:09:10 PM



dfort

@kayman1021 - if you use the image tags you can embed the image in your post.

[img]https://flic.kr/p/2fThTrf[/img]

Better yet, embed it so clicking on the image will allow readers to view and download the full sized image.

[url=https://flic.kr/p/2fThTrf][img]https://live.staticflickr.com/65535/47838123412_68485cf137.jpg[/img][/url]



Here's with the .badpixel file:



Nice find separating B,G1,G2,R values.



Oh yeah--there they are, all over the image!



Interesting that other cameras, even phone cameras, show focus pixels in raw images.

I've been mainly just mapping the focus pixels. Developers working on apps to process MLV files have been using the map files in their programs and are using various methods to fill in the focus pixels. It would be great if you could share your algorithm with them. I'm not sure where to look in focuspixeltoolxiaomi - I take it that this is a Windows executable file?

kayman1021

Thank you @dfort

In its current state my program is only able to separate the fields (and combine them back) by reordering columns and rows. The methods used can be found in Form1.cs under the names DeinterlaceVertical(), DeinterlaceHorizontal(), InterlaceVertical(), InterlaceHorizontal().
They take a two-dimensional integer array as input and overwrite it in place after the transformation.
Yes, it is a Visual Studio 2017 C# Windows application project.
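
For anyone who doesn't want to dig through the project, here is a minimal C# sketch of what such a row reorder might look like. This is not kayman1021's actual code, just an illustration of the idea, assuming even rows are grouped in the top half and odd rows in the bottom half, with the input overwritten in place:

[code]
using System;

static class FieldSplitSketch
{
    // Hypothetical sketch of a vertical field split: group even-indexed rows in the
    // top half and odd-indexed rows in the bottom half, overwriting the input array.
    public static void DeinterlaceVertical(int[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        int[,] tmp = new int[h, w];
        int half = (h + 1) / 2;                              // number of even rows
        for (int y = 0; y < h; y++)
        {
            int dst = (y % 2 == 0) ? y / 2 : half + y / 2;   // even rows up, odd rows down
            for (int x = 0; x < w; x++)
                tmp[dst, x] = img[y, x];
        }
        Array.Copy(tmp, img, tmp.Length);
    }
    // InterlaceVertical would apply the inverse mapping: row i goes back to 2*i for
    // i < half and to 2*(i - half) + 1 otherwise. The horizontal versions do the
    // same thing with columns.
}
[/code]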

It is easier for me to spot focus pixels this way, but it makes it harder to produce actual focus pixel files. The final coordinates would be 2X+offsetX; 2Y+offsetY ==> 2X+0; 2Y+0 for my pixels in the blue area, blue being 0;0 in my BGGR sensor pattern.
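
In other words, a coordinate found in one of the separated quadrants maps back to the untouched frame roughly like this (a hypothetical helper in the same spirit as the sketch above, with the offsets selecting the Bayer site):

[code]
// Hypothetical helper: a pixel found at (x, y) inside one deinterlaced quadrant maps
// back to (2x + offsetX, 2y + offsetY) in the original frame; for the blue quadrant
// of a BGGR pattern both offsets are 0, as described above.
static (int x, int y) ToOriginalCoordinate(int x, int y, int offsetX, int offsetY)
    => (2 * x + offsetX, 2 * y + offsetY);
[/code]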

I've separated the test files from the source code and the built program and made it easier to read. They can be found in the shared folder: https://drive.google.com/drive/folders/1I9eSsfgxjZnuUc3sbdGCCVxIzhCa0MBX

Made some user interface too in p00_v2.zip.
My aim is to be able to import testfile3 and 4's data. They were shot today with Danne's most recent firmware. TF3 is lossless 14-bit, TF4 is uncompressed 14-bit.
EOS 100D, Xiaomi A1

Rewind

In order to evaluate how focus dot removal works in popular raw processors (MLVFS, MLV App, MLV Producer), I'm using this improvised test chart:


14-bit raw video (1736 x 976) shot on a Canon 650D with the mlv_lite module in non-crop 1080p mode.
PDR (Pink Dot Remover tool) with updated dot data and an altered interpolation algorithm is used as a reference.


A couple of observations and suggestions:

1. When shooting 14-bit raw in non-crop mode, focus pixels are located only in the center part of the image.
No matter what AF point pattern is used, what AF method is selected in the Canon menu, or whether AF is disabled, all the focus pixels are concentrated in a central area 290 pixels high:


Therefore, by assuming focus pixels all over the frame (as the current fpm tables do), we introduce color artifacts in areas which were originally clean. This behavior should be avoided:


Suggestion: let's update the fpm tables so that they don't affect the top and bottom parts of the image.
Focus pixels should be treated only in the 290-pixel-high central area in order not to ruin the originally clean image.
The above applies to the 650D and may vary by camera model. If that is the case, maybe we should detect the camera model and limit the FP-affected area accordingly, or even let the user decide where to remove those pixels by dragging a selection area.
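
As an illustration of suggestion 1, here is a rough sketch of trimming an existing map. It assumes the .fpm file is a plain-text list of "x y" coordinate pairs, one per line, and that the 290-pixel band is vertically centered, as observed on the 650D above; it is not part of any existing tool:

[code]
using System;
using System.IO;
using System.Linq;

static class FpmTrimSketch
{
    // Sketch: keep only the focus pixel coordinates inside a central horizontal band.
    // Assumes a plain-text .fpm file with one "x y" pair per line.
    public static void LimitMapToCentralBand(string inPath, string outPath,
                                             int imageHeight, int bandHeight = 290)
    {
        int top = (imageHeight - bandHeight) / 2;
        int bottom = top + bandHeight;
        var kept = File.ReadLines(inPath)
            .Select(line => line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries))
            .Where(parts => parts.Length >= 2)
            .Where(parts => { int y = int.Parse(parts[1]); return y >= top && y < bottom; })
            .Select(parts => parts[0] + " " + parts[1]);
        File.WriteAllLines(outPath, kept);
    }
}
[/code]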

2. The interpolation algorithm works better if it takes into account only the horizontal and vertical neighbour pixels, avoiding the diagonals.
Let's call it the "cross" method for now. I already mentioned this back in 2013, and the idea may sound strange at first, but here are some fresh examples. Judge for yourself:



While this is obviously an exaggerated, extreme test (although shooting, say, a book page in video is not so rare), almost all real-world scenarios become a bit cleaner and look calmer when this "cross" method is used.
So my second suggestion is: let's introduce this interpolation method as an option in the UI, so the user may decide which one is better in a given situation.
This applies to MLVFS and MLV App. MLV Producer uses its own method, which is better and very close to what PDR does (is it the same?).
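
For reference, the "cross" idea boils down to something like the sketch below. It is only an illustration of the principle, not MLV App's, MLVFS's or PDR's actual code: the nearest same-color neighbours straight up, down, left and right (two photosites away in a Bayer mosaic) are averaged, and the diagonal neighbours are ignored.

[code]
// Illustration of the "cross" interpolation principle (not the actual app code):
// average the nearest same-color neighbours along the horizontal and vertical axes
// only, skipping the diagonals.
static void FixPixelCross(ushort[,] raw, int x, int y)
{
    int h = raw.GetLength(0), w = raw.GetLength(1);
    long sum = 0;
    int n = 0;
    // Same-color neighbours in a Bayer mosaic sit two photosites away.
    foreach (var (dx, dy) in new[] { (0, -2), (0, 2), (-2, 0), (2, 0) })
    {
        int nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < w && ny >= 0 && ny < h)
        {
            sum += raw[ny, nx];
            n++;
        }
    }
    if (n > 0) raw[y, x] = (ushort)(sum / n);
}
[/code]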

DNG's used for examples:
Original
Treated by MLV App / MLVFS
Treated by PDR

Modified interpolation algorithm explanation and code (java)

dfort

Quick reply--I'll try to get more in depth later.

You can get map files that cover only the area you're concerned with either by taking the map files from the current MLVFS or by rolling back to an early version of my ML Focus Pixels repository.

It would be interesting to compare this with the algorithm kayman1021 is working on because it doesn't involve any pixel interpolation.

kayman1021

Hi again
I am finally able to open 14-bit files, well, more like 16-bit. Basically, MLV App exports 10-14 bit files as 16-bit, so I can open all of them.

I started to experiment with Dual ISO files. I wrongly assumed the rows would be ordered like this:



Instead it looks like this:



I'd like to ask for a Dual ISO MLV/DNG sample that was correctly opened and interpolated in MLV App. I am pretty sure my camera settings were wrong, as I can see the ISO rows flip with every frame: DLLDDLLDDLLD on one frame and LDDLLDDLLDDL on the next (dark and light rows).



Flickr can't display this dng so it's a drive link: https://drive.google.com/file/d/1G8QzjHnQI2PRv1KnzVCr2Af-XsFvtDyZ/view?usp=sharing
EOS 100D, Xiaomi A1

dfort

Quote from: kayman1021 on May 26, 2019, 06:59:18 PM
I'd like to ask for a Dual ISO MLV/DNG sample that was correctly opened and interpolated in MLV App.

Here you go, trimmed it down so it isn't a huge download:

https://www.dropbox.com/sh/luceq91ux12qa9v/AADISk91Yr4n5kCK-3rnYOGKa?dl=0

kayman1021

Thank you @dfort
I've managed to separate the ISO fields, then separate the color fields in all 4 frames.
I've noticed the lines are marching with every frame. It can be seen in MLV App by zooming well into the top, with raw correction disabled and with "none" demosaic. Pressing next/prev frame will show the effect. Not sure if this is intended.
DDLLDDLL ==> DLLDDLLD ==> LLDDLLDD ==> LDDLLDDL
Could it be that the crop window is moving down? Even if so, the focus pixels stay in the same place across the frames.

I've made 2 .xcf files, they have the frames in them as layers.
levels adjusted
raw values

By the way, just an idea: if we make a pixel map in this separated mode and then apply the "opposite" instructions in reverse order, we should get a pixel map for the original, unmodified picture (a rough sketch of the ISO field separation step follows the step lists below).

Regular:
Load DNG, deinterlace, use this image to make a deinterlaced pixel map. Load back the created pixel map, interlace --> profit.

Dual ISO:
Load DNG, separate the ISO fields, deinterlace the top/bottom ISO field, use this image to make a deinterlaced pixel map. Load back the created pixel map, interlace the top/bottom ISO field, make the ISO fields alternate again --> profit.
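
A minimal sketch of the "separate ISO fields" step, assuming the exposure alternates in pairs of rows (DDLLDDLL...) as in the frames above and that the height is divisible by four; this is only an illustration, not the actual tool code:

[code]
// Hypothetical sketch: move the first row pair of every group of four into the top
// half of the frame and the second pair into the bottom half, preserving the Bayer
// row parity inside each pair. Overwrites the input, height assumed divisible by 4.
static void SeparateIsoFields(int[,] img)
{
    int h = img.GetLength(0), w = img.GetLength(1);
    int[,] tmp = new int[h, w];
    int half = h / 2;
    for (int y = 0; y < h; y++)
    {
        int pair = y / 2;                      // index of the two-row pair
        int dst = (pair % 2 == 0)
            ? (pair / 2) * 2 + (y % 2)         // even pairs -> top half
            : half + (pair / 2) * 2 + (y % 2); // odd pairs  -> bottom half
        for (int x = 0; x < w; x++)
            tmp[dst, x] = img[y, x];
    }
    Array.Copy(tmp, img, tmp.Length);
}
[/code]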

Maybe I'll be able to program pixel map loading and exporting by the weekend.
If these steps are implemented without error, the next step would be to experiment with the values lying at the pixel map coordinates.
Maybe these values are just offset, maybe they come from another ISO level.
If this turns out to be a dead end, maybe I'll try interpolation techniques as a last resort.

current program status



EDIT:
I also noticed lines in the overexposed areas in the higher ISO fields. They align perfectly in the top/bottom direction; not sure what this is.

EOS 100D, Xiaomi A1

dfort

I never went too deeply into interpolation methods but can respond to this:

Quote from: Rewind on May 25, 2019, 09:38:37 AM
1. When shooting 14-bit raw in non-crop mode, focus pixels are located only in the center part of the image.
No matter what AF point pattern is used, what AF method is selected in the Canon menu, or whether AF is disabled, all the focus pixels are concentrated in a central area 290 pixels high:
...
Therefore, by assuming focus pixels all over the frame (as the current fpm tables do), we introduce color artifacts in areas which were originally clean.

This is true for 14-bit (lossless compression and uncompressed) but not necessarily true for reduced bit depths with lossless compression. Tests have shown that focus pixels can show up outside of this well-defined area. Note that in the case of the 100D and EOSM2 that well-defined area is much larger. In addition, the various crop_rec module settings will sometimes combine patterns, and cleaning them up requires multiple passes.

I'm not really sure how these so-called focus pixels actually work; all I'm doing here is trying to remove them from the final image without destroying its quality. I started this topic out of my frustration trying to get Pink Dot Remover (PDR) working for me. Now that @Rewind is pointing to a pull request from 2013, it pretty much confirms my suspicion that this app was abandoned years ago. That's too bad, because had the interpolation algorithm been merged along with some updated pixel maps, it might have had a longer life.

Having different interpolation choices as well as various map files to choose from might be something for the MLV processing app developers to consider. However, even with a very extreme map file some interpolation algorithms don't seem to introduce color artifacts. See some of the results posted on Reply #630 compared to using good old dcraw with the same map file (a.k.a. .badpixel file) on Reply #634.

One issue that seems to affect all pixel interpolation methods is that high-contrast boundaries, Dual ISO, or hitting the edges of the frame will confuse the algorithms. I wonder how well the method @kayman1021 is developing will work compared to the various interpolation methods.

2blackbar

https://drive.google.com/open?id=1u-EaJ1fw9OPy25p3lLytR5Ors4Merj5v
I have a pattern of focus pixels on the Canon M that can't be fixed in MLV App with the map files in the same folder, not sure why; it's a non-animated one.

dfort

Quote from: 2blackbar on May 29, 2019, 02:46:18 AM
I have a pattern of focus pixels on the Canon M that can't be fixed in MLV App with the map files in the same folder, not sure why; it's a non-animated one.

What do you mean by a non-animated one? It seems to work fine over here. Are you sure you're using the latest map files and have them installed in the right directory?





2blackbar

I'll download them again right now. I was working on that pattern and extracted it just in case; it was quite easy on this shot.
http://picplus.ru/img/1905/29/302e0880.png
Looks like the pixel maps aren't fixing this pattern in MLV App; chroma smooth 3x3 or 5x5 does. I have the recent ones, the .fpm files extracted to where MLV App is.
I didn't see this pattern in the focus_pixel_image_files folder, or did I miss it somehow?
How did you get it fixed in MLV App exactly? Also, this was shot in 3x3 crop. I have other files that have this pattern (not exactly the same but a similar one, shot in mcm rewire 16:9), but they're not from 3x3 crop, so the pattern is smaller and looks like this:
http://picplus.ru/img/1905/29/eca84c0b.png
https://drive.google.com/open?id=11wdVkGBnMoT2wqCsb3hhOJJeghrMi9IZ

dfort

The focus pixel map files are working fine on that one too. I gave a quick tutorial on how to add the latest .fpm files in Reply #641.

2blackbar

Map files work only after I use chroma smooth; is this how they are supposed to work? When I disable pixel map fixing, chroma smooth still hides the focus pixels on its own. Am I doing it wrong or correctly? How is this pattern fixed if it's not in focus_pixel_image_files?

kayman1021

https://www.magiclantern.fm/forum/index.php?topic=20025.msg216138#msg216138
"- Drop focus pixel map files into app to install (except Linux AppImage)"

I've never managed to see it work, but it seems to be the correct way to install them.
EOS 100D, Xiaomi A1

ThatMakerGuy

Hey. Superb work folks, thank you!

I figured out how to add map files to MLV App on Linux. You'll need to extract the AppImage first with ./MLV.App.v1.7.Linux.x86_64.AppImage --appimage-extract. This will create a folder named "squashfs-root". Throw the map files into it, then cd into it and run ./mlvapp *.fpm. It seems you only need to do this once; after that you can just double-click the executable.
On Windows you need to drag the map files onto the mlvapp executable itself, not onto the window of an already running application.

Hope this helps, cheers!

dfort

Quote from: 2blackbar on May 29, 2019, 09:37:28 AM
Map files work only after I use chroma smooth; is this how they are supposed to work?

No, it should work without having to use chroma smooth. Seems like there might be an issue with the Windows version of MLV App or maybe it is your installation? I'm using the Mac version, I'll try to check it out on Windows when I get some quiet time.

kayman1021

@dfort Can I ask for the full-length averaged MLV file of the beach scene?

I've mapped ~8700 pixels so far; these were quite visible.
Averaging the 4 frames you gave me added another ~20000 pixels to the map.

Also, is there any way to see which fpm file is in use in MLVApp?
EOS 100D, Xiaomi A1

2blackbar

Quote from: ThatMakerGuy on May 29, 2019, 02:33:21 PM
Hey. Superb work folks, thank you!

I figured out how to add map files to MLV App on Linux. You'll need to extract the AppImage first with ./MLV.App.v1.7.Linux.x86_64.AppImage --appimage-extract. This will create a folder named "squashfs-root". Throw the map files into it, then cd into it and run ./mlvapp *.fpm. It seems you only need to do this once; after that you can just double-click the executable.
On Windows you need to drag the map files onto the mlvapp executable itself, not onto the window of an already running application.

Hope this helps, cheers!
It made no difference; I can still see the focus pixels until I use chroma smoothing 3x3.

dfort

Quote from: kayman1021 on May 29, 2019, 03:38:57 PM
...is there any way to see which fpm file is in use in MLVApp?

I don't think MLV App shows you all of the metadata. MLVFS will show which map it is loading if you run it from the terminal. The easiest way to figure out which map file is being used is by looking at the metadata with mlv_dump:

mlv_dump -v EOSM_dual_iso.MLV | less
Block: RAWI
...
      height           1951
      width            1808
...
Block: IDNT
...
     Camera Model:  0x80000331


It is using 80000331_1808x1951.fpm
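
In other words, the expected map file name is just the camera model ID (in hex, without the 0x prefix) followed by the RAWI buffer size. A hypothetical helper that reproduces the naming pattern shown above:

[code]
// Hypothetical helper reproducing the .fpm naming convention:
// model 0x80000331 with a 1808x1951 raw buffer -> "80000331_1808x1951.fpm".
static string FpmFileName(uint cameraModel, int rawWidth, int rawHeight)
    => $"{cameraModel:x}_{rawWidth}x{rawHeight}.fpm";
[/code]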

Uploading full-length files of the palm trees. I shot both normal and dual_iso so I'm giving you both. These are large files, so give it some time if you don't see them right away. Let me know when you've got them so I can free up some space.

https://www.dropbox.com/sh/luceq91ux12qa9v/AADISk91Yr4n5kCK-3rnYOGKa?dl=0

kayman1021

Thank you, done downloading.
One averaged frame would have been fine.
EOS 100D, Xiaomi A1

dfort

@kayman1021 - Didn't read your post properly. I gave you the full original camera files.  :P

@2blackbar - Just tried your test file of the airplane on my Windows system with MLV.App.v1.7.Win64.static and it worked right out of the box, without having to install the map files. Installing the map files was just as easy: I simply dropped them onto the MLVApp.exe icon.

Note that MLV App has embedded code that will dynamically create the focus pixel map file if possible. With all these new crop_rec settings and resolutions it is hard for me to keep up and I haven't had the time to update the code yet. However, it is easy enough to make new map files. MLV App will use the installed map file instead of dynamically creating one if it can match it up using the metadata.

Long story short -- install the latest version of MLV App and report back.

kayman1021

Okay, false alarm about the 20000 focus pixels.
It turns out that in MLV App any kind of Raw Correction affects the exported DNG.
Fix Focus Dots was off (the entire Enable Raw Correction was off) for the single-frame export.
Fix Focus Dots was on for the averaged-frame export, which caused the massive number of extra pixels in this Dual ISO file.
EOS 100D, Xiaomi A1

2blackbar

I did all that, removed the Magic Lantern MLV App entries from the registry so it's a clean install, dragged the pixel map files onto the mlvapp exe, and it's still the same: dots are visible until I hit chroma smooth 3x3. I'm on Windows 7 if that changes anything.

Rewind

I have also considered the idea that the values of focus pixels are just some kind of offset from the original values.

It seems obvious at a glance, since the FP pattern is brighter on highly exposed images and darker on underexposed ones. There are different types of behavior though. The offset can be positive or negative, and its absolute value varies quite a bit (e.g. the central part is usually brighter), but the overall picture is clear. At this point we know for sure that some kind of correlation exists:


In order to verify this idea, I shot an evenly lit grey card with a series of exposures from underexposed to fully overexposed, to see if this offset remains constant through the whole dynamic range of the sensor.

The idea was simple: since I shot a uniform grey card, I can restore the original value almost exactly by averaging the neighboring "healthy" pixels of the same color. So I wrote a small utility that reads DNG files, maps the focus pixels and plots graphs of the actual values against what the values should be if they were regular pixels.
I got different types of graphs: most of them are almost linear, others have a noticeable roll-off in the highlights, and the curve slope differs from row to row:

(the y-coordinate is the actual value of the focus pixel, the x-coordinate is the original value that should be restored)
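
For context, the "average the healthy neighbours" estimate can be sketched like this. The focus pixel lookup table and the choice of neighbours are assumptions for illustration, not Rewind's actual utility:

[code]
// Sketch of the reference-value estimate used for the graphs above: average the
// same-color neighbours two photosites away, skipping any that are focus pixels
// themselves (isFocusPixel is a hypothetical lookup built from the pixel map).
static double HealthyNeighbourAverage(ushort[,] raw, bool[,] isFocusPixel, int x, int y)
{
    int h = raw.GetLength(0), w = raw.GetLength(1);
    double sum = 0;
    int n = 0;
    foreach (var (dx, dy) in new[] { (0, -2), (0, 2), (-2, 0), (2, 0) })
    {
        int nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < w && ny >= 0 && ny < h && !isFocusPixel[ny, nx])
        {
            sum += raw[ny, nx];
            n++;
        }
    }
    return n > 0 ? sum / n : raw[y, x];   // fall back to the pixel itself if isolated
}
[/code]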

The first obvious problem is that some focus pixels reach the highest value very fast and then stay overexposed, so they do not carry any useful information past that exposure.
OK, let's say we can use interpolation techniques for those when they are overexposed. What about the others? Hey, now we have all the data from the graphs to restore the original values, right?

Well, sort of. Or should I say "not at all".
I updated my script to do the opposite: read a DNG file, take the previously saved graph data for a given focus pixel, extrapolate it according to the actual pixel value, and update the raw data in the DNG file.

When I processed the first test frame, I almost jumped from my chair with a joyful shout. But that was only the beginning. The thing is, the method kind of works only in the same lighting conditions, when shooting the same grey card, and with the same ISO.
OK, I thought, we can build the graphs for every ISO, that's not a problem. But what's the matter with the light?
I got a desk lamp with adjustable color temperature and went deeper. Changing the color temperature results in a noticeable value shift in the FP pattern.

Further investigation with color overlay charts showed that the values of focus pixels depend not only on the intensity of their own channel but on the green channel as well.
For example, for a given red focus pixel its value would be:
R = R1 + G*x,
where R1 is the original value that should be restored, G is the intensity of the neighboring green pixels, and x is some multiplier, different for each type of FP.
For a pixel from a blue Bayer block: B = B1 + G*x.

I updated my program to get these multipliers, store them alongside the graph data, and calculate and set the desired R1 and B1 values.
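
As a sketch, the correction step amounts to subtracting the measured green-dependent offset and clipping to the valid range; the multiplier here is assumed to come from the grey-card measurements described above, and this is not Rewind's actual script:

[code]
// Sketch of the offset-model correction: R1 = R - G*x, clipped to the sensor's range.
// 'multiplier' is assumed to have been measured per focus pixel from the grey-card series.
static int RestoreFocusPixel(int actualValue, double greenEstimate, double multiplier, int whiteLevel)
{
    double restored = actualValue - greenEstimate * multiplier;
    return (int)Math.Max(0, Math.Min(whiteLevel, restored));
}
[/code]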

Now it seems to work better with any light color temperature, but only in blurred areas. As you may have guessed already, since the values depend on the green channel, the green value should somehow be averaged for this point to calculate them properly, but this turns into the same old problem: artifacts on contrast edges. Because the closest green pixel for a given point is, well, at least one pixel aside, its value may be on the other side of the edge. So this method requires some kind of interpolation to find proper green values.

Wait, interpolation? OK, so we came back to where we started.
Unfortunately, this offset method still requires some sort of adaptive interpolation for all the focus pixels. For this reason it works well in smooth areas (where any regular interpolation works) and produces artifacts of the same nature on contrast edges.

There's one thing though that should still yield better results: since the green channel contains twice as much information as red or blue, interpolating it should produce fewer artifacts.
But then I discovered that some pixels depend on all three values, e.g. R = R1 + G*x + B*y, so restoring them becomes a total mess. Enthusiasm diminished :))

By far, the modified adaptive interpolation (the cross method I mentioned earlier) is the best approach:


The paper: Adaptive pixel defect correction

ilia3101