Dealing with Focus Pixels in raw video

Started by dfort, October 22, 2015, 11:09:10 PM

dfort

@maxotics

Quite a post. A bit off topic but I'm pretty much in agreement.

I have been very close to pulling the trigger on a C100 to replace one of my EOSMs. OK, just kidding, but if I were to get serious about the gear I own, the price of the original C100 is currently the same as a 5D Mark III ($2,499 at B&H). Just a step up is the C100 Mark II, which is close to the same price as a 5D Mark IV. The advantage of the C100 for me is that Magic Lantern isn't available for that camera, so I would probably spend less time hacking and more time shooting. What is holding me back from getting one is that my friends who shoot on Arri cameras most of the time tend to look down on the low-end Canon cine cameras.

I've shot a short using a 5D3 with MLV raw video. The workflow and data wrangling are a PITA compared to working with the MOV files straight out of the camera. Other ML users will probably disagree with me, but the images from the C100 are about as good as ML raw. Of course you don't have as much control as you do with raw, but I used to be a photographer when everything was shot on film (before Photoshop) and had to get it right in the camera.

Back on topic, the goal of this project was to eventually shield users from ever having to deal with focus pixels. MLVFS is doing a good job of that now. I haven't tried MLV Producer recently (I'm on a Mac) but have been hearing some good things about how it handles focus pixels. The big challenge will be getting this working in mlv_dump.

Danne

When treating a darkframe as a flatframe in mlv_dump you'll get this revealing output.

dfort

That reminds me of the image file I was able to pull off the focus pixel map from the PinkDotRemover tool 650D:



I have never seen a single frame that displays all of the focus pixels, so that may mean that using a previously saved dark frame as a focus pixel map might not work all of the time. It might not even work from the beginning to the end of a take, because lighting, sensor temperature, etc. can change while the camera is rolling. My understanding from the astrophotography forum posts I've read is that a dark frame should be taken under the same conditions (especially sensor temperature) as the shot that you are going to be applying it to.
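The dark frame idea can be sketched in C. This is only an illustration of the principle, not ML code; the buffer layout, black level, and margin are all assumptions. In a dark frame every photosite should read near the black level, so anything well above it is a candidate bad/focus pixel:

```c
#include <stddef.h>

/* Scan a dark frame for bright outliers. In a dark frame every photosite
 * should sit near the black level, so a value well above it marks a pixel
 * (focus pixel, hot pixel) that would need correction. The margin is an
 * arbitrary tolerance above the black level. Returns the total number of
 * candidates found, even when more than max_out were seen. */
size_t find_bright_pixels(const unsigned short *frame, size_t width, size_t height,
                          unsigned short black_level, unsigned short margin,
                          size_t *xs, size_t *ys, size_t max_out)
{
    unsigned thresh = (unsigned)black_level + margin;
    size_t count = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (frame[y * width + x] > thresh) {
                if (count < max_out) { xs[count] = x; ys[count] = y; }
                count++;
            }
        }
    }
    return count;
}
```

Note that a pixel only shows up in such a scan if it happened to read bright under those particular conditions, which is exactly why a single dark frame cannot serve as a complete focus pixel map.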

Danne

The darkframe works in the shadows, and the best output is achieved with chroma smooth 2x2. The more or less perfect result comes from using the focus pixel maps together with the darkframe and cs2x2.

martinhering

I am working on a method of removing focus pixels myself, and I thought I'd post the pixel map for the Canon EOS M here in case anybody can make use of it with the usual tools. I did not try it myself. I have a method of extracting the focus pixel map from actual MLV files. All I need is a short recording at the highest possible resolution in fullframe and crop mode. If anybody can provide that for other cameras, I can extract the map and post it here.

https://www.dropbox.com/s/sdqopv87n6w9us6/EOS_M_FocusPixels.zip?dl=0

If you make a recording, remove the lens and point the camera at the sky. Then expose the image so that you have about 98% white on screen. The white should not clip.
5D Mark III, EOS M, 700D

Danne

How did you build them? Any code to check out?

martinhering

I am extracting the red, green and blue channels from the raw_buffer and saving them as 8-bit grayscale tiff files. Then I use Photoshop to change the colors. Pretty straightforward.

Here's the code. Don't know if that's of any use:


    size_t halfW = _rawInfo.width/2;
    size_t halfH = _rawInfo.height/2;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGrayGamma2_2);
    size_t bytesPerPixel = 1;
   
    CGContextRef redBP = CGBitmapContextCreate(nil, halfW, halfH, 8, halfW*bytesPerPixel, colorSpace, kCGImageAlphaNone);
    uint8_t* redPtr = CGBitmapContextGetData(redBP);

    CGContextRef blueBP = CGBitmapContextCreate(nil, halfW, halfH, 8, halfW*bytesPerPixel, colorSpace, kCGImageAlphaNone);
    uint8_t* bluePtr = CGBitmapContextGetData(blueBP);

    CGContextRef greenBP = CGBitmapContextCreate(nil, halfW, _rawInfo.height, 8, halfW*bytesPerPixel, colorSpace, kCGImageAlphaNone);
    uint8_t* greenPtr1 = CGBitmapContextGetData(greenBP);
    uint8_t* greenPtr2 = greenPtr1 + halfW;

    register int32_t x, y, yadj, xadj, gadj;
   
    yadj = (_rawInfo.cfa_pattern == 0x01000201) ? 1 : 0;
    xadj = (_rawInfo.cfa_pattern == 0x01020001) ? 1 : 0;
    gadj = (_rawInfo.cfa_pattern == 0x02010100) ? 0 : 1;

    for (y=0; y<_rawInfo.height; y+=2) {
        for (x=0; x<_rawInfo.width; x+=2) {
            int32_t r = get_raw_pixel(&_rawInfo, _rawBuffer, x+xadj, y+yadj);
            int32_t g1 = get_raw_pixel(&_rawInfo, _rawBuffer, x+1-gadj, y);
            int32_t g2 = get_raw_pixel(&_rawInfo, _rawBuffer, x+gadj, y+1);
            int32_t b = get_raw_pixel(&_rawInfo, _rawBuffer, x+1-xadj, y+1-yadj);

            uint8_t r8 = raw_to_8bit_linear(r, &_rawInfo);
            *redPtr = r8;

            uint8_t b8 = raw_to_8bit_linear(b, &_rawInfo);
            *bluePtr = b8;

            uint8_t g18 = raw_to_8bit_linear(g1, &_rawInfo);
            *greenPtr1 = g18;

            uint8_t g28 = raw_to_8bit_linear(g2, &_rawInfo);
            *greenPtr2 = g28;

            redPtr++;
            bluePtr++;
            greenPtr1++;
            greenPtr2++;
        }

        greenPtr1 += halfW;
        greenPtr2 += halfW;
    }

    CGImageRef redImageRef = CGBitmapContextCreateImage(redBP);
    NSImage* redImage = [[NSImage alloc] initWithCGImage:redImageRef];
    // writeToFile: does not expand "~", so expand it explicitly
    [[redImage TIFFRepresentation] writeToFile:[@"~/raw_red.tiff" stringByExpandingTildeInPath] atomically:YES];

    CGImageRef blueImageRef = CGBitmapContextCreateImage(blueBP);
    NSImage* blueImage = [[NSImage alloc] initWithCGImage:blueImageRef];
    [[blueImage TIFFRepresentation] writeToFile:[@"~/raw_blue.tiff" stringByExpandingTildeInPath] atomically:YES];

    CGImageRef greenImageRef = CGBitmapContextCreateImage(greenBP);
    NSImage* greenImage = [[NSImage alloc] initWithCGImage:greenImageRef];
    [[greenImage TIFFRepresentation] writeToFile:[@"~/raw_green.tiff" stringByExpandingTildeInPath] atomically:YES];


So far I only found focus pixels in the red and blue channel.
5D Mark III, EOS M, 700D

dfort

Interesting stuff. I have never seen a single frame that displays all of the focus pixels. Could you show us what the tiff files look like? So you're using Photoshop to remove the focus pixels by changing their color?

If you skim through this topic you'll find that there are basically just two different patterns. The EOSM/650D/700D all share the same pattern, and the 100D is different. I suspect that the EOSM2 shares the same pattern as the 100D because it uses the same sensor. Of course there's no ML port for that camera, so it is just theoretical.

By the way, since you have an EOSM have you tried the crop rec module with it? It was a challenge figuring out the focus pixels for that video mode.

martinhering

Here are the red and blue tiffs:

R: https://www.dropbox.com/s/3430fvt3dufjlx3/1600x540_red.tiff?dl=0
B: https://www.dropbox.com/s/ffzkeucgwcgbcul/1600x540_blue.tiff?dl=0

and here's the corresponding MLV:
https://www.dropbox.com/s/ro5n385c29xtjxz/1600x540.MLV?dl=0

I found out that you basically need to record one map for fullframe mode and one for crop mode, as the pixel layout seems to differ. The map for a particular resolution can be derived from the map with the highest resolution, as the frame is centered around the sensor center point. I'll implement a correction function in the next few days that takes the pixel map and interpolates the bad pixels. Should work.
5D Mark III, EOS M, 700D

dfort

Thanks for the files, let's take a look.

Here is a frame of the MLV with the vibrance and saturation pushed to show the focus pixels. Never mind the color; it is most likely way off.



From the looks of it this was shot in mv720 mode. The EXIF data shows that you used an EOSM.

Here is your red tiff--



and your blue tiff.



The first thing I did was to put those two tiffs in Photoshop layers to see how they line up. It doesn't seem right because the pixels are adjacent to each other in the x-axis.



I don't know how that happened but if you take a close look at the focus pixels in your shot you'll see that they are offset diagonally.



The pixel map that is working in MLP, cr2hdr.app, MLVFS and MLV Producer uses a pattern that hits all of the known focus pixels. Here it is superimposed over your image:



When your shot is run through one of the apps that can deal with the focus pixels using this map file, the focus pixels do not appear in the final image.

Another thing to point out is that your adjustments might work for that one shot as long as the lighting doesn't change. There are a lot of focus pixels that are not represented in your tiff files. I too had that cross-shaped pattern for a while, until a user showed me that there are focus pixels outside of that area. As far as I can determine, the focus pixels on the EOSM/650D/700D form a pattern that looks like this:



That map will cover all of the possible focus pixels that might appear in all lighting situations.

What would really help is a better way to blend the pixels. Your idea of changing the colors is interesting, are you interpolating just one color channel surrounding a focus pixel?

A problem with blending surrounding pixels comes up when there are high contrast sharp lines like in this shot from a 100D:



I'm not sure what will work for situations like this. What helps is to combine chroma smoothing along with the focus pixel map but even that isn't enough in some cases.

Quote from: martinhering on March 02, 2017, 09:23:20 PM
I found out that you basically need to record one map for fullframe mode and one for crop mode, as the pixel layout seems to differ. The map for a particular resolution can be derived from the map with the highest resolution, as the frame is centered around the sensor center point.

If you read through this topic you'll find out that the maximum area is represented by the full raw buffer which you can capture using the silent picture module. MLV files have crop metadata which shows where on that full raw buffer the image is located. Don't assume that it is centered.

There are more than those two modes, though most EOSM users only use those two. There's mv1080 (not available on the EOSM unless you also record H.264), mv720, mv1080crop and zoom mode. The zoom mode is interesting because it uses a much larger full raw buffer and the image can move around the sensor. There is a "digital dolly" option that can actually move around the sensor dynamically so you need to look at the crop metadata for every frame. Oh, and with the experimental crop_rec module there are even more possibilities.
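The crop metadata handling described above can be sketched in C. This is an illustrative fragment, not MLVFS or mlv_dump code, and the names are invented; the point is that each map coordinate is absolute in the raw buffer and must be shifted by that frame's crop offset before use:

```c
#include <stdbool.h>

/* A focus pixel map entry in absolute raw-buffer coordinates. */
typedef struct { int x, y; } map_entry;

/* Translate an absolute map entry into recorded-frame coordinates for one
 * frame, given that frame's crop offset. Returns false when the pixel falls
 * outside the recorded window (so it needs no correction in this frame).
 * For "digital dolly" shots, crop_x/crop_y must be re-read for every frame
 * because the recorded window moves around the raw buffer during the take. */
bool map_to_frame(map_entry p, int crop_x, int crop_y,
                  int rec_width, int rec_height, int *fx, int *fy)
{
    int x = p.x - crop_x;
    int y = p.y - crop_y;
    if (x < 0 || y < 0 || x >= rec_width || y >= rec_height)
        return false;
    *fx = x;
    *fy = y;
    return true;
}
```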

You are more than welcome to look through my focus pixel bitbucket project. The fpm.sh script has some comments you might find useful. Improvements are always welcome.

By the way, @bouncyball -- if you read this message please push those changes that added the -n switch to the script.

DeafEyeJedi

Quote from: dfort on March 03, 2017, 05:47:26 AM
...By the way, @bouncyball -- if you read this message please push those changes that added the -n switch to the script.

+1
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

bouncyball

Quote
...By the way, @bouncyball -- if you read this message please push those changes that added the -n switch to the script.

Done :)

dfort

Thanks!

That -n switch shows the uncropped raw buffer. Well, almost. It doesn't crop the upper left corner but it does crop the lower right. I'll look into it.

What it should do is to dynamically create files identical to the static map files used in MLVFS.

martinhering

@dfort

Thank you for your summary. That was very helpful. I was able to extract all of the focus pixels now using the silent picture module. The red and blue pixels now line up as well.

QuoteA problem with blending surrounding pixels comes up when there are high contrast sharp lines like in this shot from a 100D:

I have the same problem on my EOS M. It comes up with Apple's raw engine as well as Adobe's, though I have only tested 10-bit mode. Do you have this problem in 14-bit mode as well?
5D Mark III, EOS M, 700D

Teamsleepkid

probably need chroma smoothing for all bit depths. then sharpen.
EOS M

martinhering

Quoteprobably need chroma smoothing for all bit depths. then sharpen.

It could also be that it has something to do with fullframe mode vs. crop mode. Does fullframe mode scale the channels in camera? That could also result in the pixel components being slightly off. I noticed that crop mode does not produce a stretched image. I'll test that tomorrow.
5D Mark III, EOS M, 700D

dfort

Full frame skips pixels in a 3x3 pattern for mv1080 and 3x5 in mv720 (default for EOSM) while the crop modes, mv1080crop and zoom, don't skip any pixels so they are outputting a 1x1 pattern but of course a smaller area of the sensor. There's some confusion with the crop_rec module because on the EOSM it uses the mv720 full raw buffer but with the 3x3 pattern of mv1080. Yes, it can get confusing.
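The skip factors can be sketched like this. It is a simplification (the real sampling operates on 2x2 Bayer blocks, and the struct and names are illustrative), but it shows why a focus pixel's sensor position depends on which video mode was recording:

```c
/* Line-skipping factors for the video modes described above:
 * mv1080 samples a 3x3 pattern, mv720 (the EOSM default) a 3x5 pattern,
 * and the crop modes (mv1080crop, zoom) read the sensor 1:1.
 * Mapping a recorded-frame coordinate back to an approximate sensor
 * coordinate is then just a multiplication by the skip factors. */
typedef struct { int skip_x, skip_y; } skip_mode;

static const skip_mode MV1080 = { 3, 3 };
static const skip_mode MV720  = { 3, 5 };
static const skip_mode CROP   = { 1, 1 };

void recorded_to_sensor(int rx, int ry, skip_mode m, int *sx, int *sy)
{
    *sx = rx * m.skip_x;
    *sy = ry * m.skip_y;
}
```

This also illustrates the crop_rec confusion: on the EOSM it uses the mv720 raw buffer but the 3x3 skip pattern of mv1080, so neither existing mode's mapping fits on its own.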

bouncyball

Quote from: dfort on March 03, 2017, 05:08:10 PM
That -n switch shows the uncropped raw buffer. Well, almost. It doesn't crop the upper left corner but it does crop the lower right. I'll look into it.
Yes, it needs more investigation, or at least changes to detect the new crop modes in a better way (adding info to the MLV headers, etc.).

I did it because in my recently developed (still unfinished) version of mlv_dump I used the MLVFS pixel correction code, which subtracts crop(x,y); hence the map generated by your script for the 700D MLV sample I have was off by that offset. So I thought that modifying the map-generating script itself was the more appropriate way to achieve a correct result. I did not think about the lower right corner though :)

Regards
bb

Danne

In reality this means we would dynamically build a pixel map file for each MLV file and apply the MLVFS pixel mapping code to every MLV file, rather than relying on static map files? Needed because of the +/-1 height crop issue which sometimes occurs and breaks the static maps?

bouncyball

I don't really understand what you mean by a "static" map. AFAIK the MLVFS maps are absolute values relative to raw frame buffer x=0 and y=0, with no crop/pan offset applied. To alter only the pixels of the visible rectangle, that offset is subtracted by the pixel correction code in MLVFS. Daniel's script does the same, and that's the reason the -n switch is required: to subtract the offset only once, not twice.

martinhering

My focus pixel correction is working:



Thank you for your support.
5D Mark III, EOS M, 700D

bouncyball

@martinhering: really like your software, so sad it's not open source (yet?) :)

Danne

Quote@martinhering: really like your software, so sad it's not open source (yet?) :)
I couldn't agree more. This community is based around sharing sources. It's what makes it grow.

dfort

Quote from: bouncyball on March 05, 2017, 11:50:29 AM
I don't really understand what you mean by a "static" map. AFAIK the MLVFS maps are absolute values relative to raw frame buffer x=0 and y=0, with no crop/pan offset applied. To alter only the pixels of the visible rectangle, that offset is subtracted by the pixel correction code in MLVFS. Daniel's script does the same, and that's the reason the -n switch is required: to subtract the offset only once, not twice.

What I mean by a "static" map is a stand-alone file that maps the locations of all the focus pixels in a full raw buffer. When an application uses these "static" files it needs to use the crop and image size metadata to create a new map file for the MLV file being processed. My bash script creates these map files dynamically, so it doesn't have to wrangle a bunch of data files. The downside is that it creates the map files using a formula, so the source code is harder to update, especially when new patterns come up like the ones from the crop_rec module.

I like your -n switch because with it you should be able to create a full raw buffer map file from the script. The issue is that it needs to use the size of the full raw buffer. The way you did it, the map file starts at the beginning of the full raw buffer but is cut off.

dmilligan wrote an excellent post on how this works. Using his illustration, here is what is happening:

(0,0)
  ____________________________________________
  |                                         |^
  |                                         ||
  |     _______________________________     ||
  |     |\                          |^      ||
  |     | CropPos(x,y)              ||      ||
  |     |                           ||      ||
  |     |                           |Rec    |Raw Buffer
  |     |                           |Height |Height
  |     |                           ||      ||
  |     |                           ||      ||
  |     |                           |V      ||
  |     |___________________________|_      ||
  |     |<---- Recorded Width  ---->|       ||
  |                                         |V
  |_________________________________________|_
  |<----------Raw Buffer Width------------->|


The "static" maps use the Raw Buffer Width and Height, and the apps using these map files crop them down to the Recorded Width and Height. What the -n switch is currently doing is this:

(0,0)
  ___________________________________
  |                                 |        ^
  |                                 |        |
  |                                 |___     |
  |                                 |^       |
  |                                 ||       |
  |                                 ||       |
  |                                 |Rec     Raw Buffer
  |                                 |Height  Height
  |                                 ||       |
  |                                 ||       |
  |                                 |V       |
  |_________________________________|_       |
        |<---- Recorded Width  ---->|        |
                                             V
  
   <----------Raw Buffer Width------------->


From what I've seen, your latest mlv_dump that deals with focus pixels uses external (shall we call them "static"?) map files, one per MLV file. In most cases that's fine, but if you have a shot that uses the "digital dolly", where the image area moves around the raw buffer during the shot, it won't work. You should check the crop on every frame, and if there's a change you need to adjust for it as you create the DNG frames.

One idea I had for integrating these map files into mlv_dump was to include graphic image files of the full raw buffers in the source code and convert them into large arrays at compile time. That way you don't need to wrangle external data files or use a formula to create the focus pixel pattern. If a new focus pixel pattern comes up, like with the crop_rec module, you can create a new image file for it in Photoshop or Gimp. I'm not sure how difficult this would be to code--I'm still struggling to write a "Hello World!" program in C.
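A sketch of how that could look, assuming the maps were stored as 8-bit binary PGM images (everything here is hypothetical, not existing ML code). A tiny build-time generator would parse the image and dump it as a C array, so mlv_dump could be compiled with the maps baked in:

```c
#include <stdio.h>

/* Parse a binary 8-bit PGM ("P5") image into a pixel buffer. A build step
 * would feed the map image through this and emit the result as a C array,
 * with nonzero bytes marking focus pixel locations. PGM comment lines are
 * not handled; this is only a sketch. Returns the pixel count, or -1 on a
 * malformed or oversized image. */
int load_pgm(FILE *f, unsigned char *px, int maxpx, int *w, int *h)
{
    int maxval;
    if (fscanf(f, "P5 %d %d %d", w, h, &maxval) != 3 || maxval > 255)
        return -1;
    fgetc(f);                       /* single whitespace byte after header */
    int n = *w * *h;
    if (n > maxpx || n <= 0) return -1;
    if (fread(px, 1, (size_t)n, f) != (size_t)n) return -1;
    return n;
}

/* Emit the loaded map as C source; redirecting this into a generated
 * header gives the compile-time array described above. */
void emit_array(FILE *out, const char *name, const unsigned char *px, int w, int h)
{
    fprintf(out, "static const unsigned char %s[%d] = {", name, w * h);
    for (int i = 0; i < w * h; i++)
        fprintf(out, "%s%d", i ? "," : "", px[i]);
    fprintf(out, "};\n");
}
```

The same approach would let a new pattern be added by drawing a new map image in Photoshop or Gimp and re-running the generator, with no code changes in mlv_dump itself.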

bouncyball

Hey! Thank you for a long and comprehensive post :)

I have followed this thread since mid-2016 and really appreciate the effort you have put into solving this quite complicated issue.

Quote from: dfort on March 06, 2017, 07:55:27 PM
From what I've seen, your latest mlv_dump that deals with focus pixels uses external (shall we call them "static"?) map files, one per MLV file. In most cases that's fine, but if you have a shot that uses the "digital dolly", where the image area moves around the raw buffer during the shot, it won't work. You should check the crop on every frame, and if there's a change you need to adjust for it as you create the DNG frames.
Not exactly.

The mlv_dump version you are referring to is my and Danne's previous experiment with the ml-dng branch, which is used in MLP right now. Yes, the last change I made there was focus map loading. It uses the cold pixel interpolation from raw2dng to wash out the focus pixels, and the map is not "static" but generated by the 'fpm.sh' script as usual. Digital dolly shots are indeed broken here.

This time, my latest experiment is based on the MLVFS processing, so it uses "static" maps only, and the algorithm is as follows:

The code tries to load 'input_mlv_name.fpm'; if it does not exist, the next step is to load 'camera_id__raw_width__raw_height.fpm', like MLVFS does. If a map is loaded, the code checks whether the raw data is dual ISO or not (specified by a switch) and then performs horizontal or around-pixel interpolation accordingly, taking edge-pixel cases into account (thanks to dmilligan). Digital dolly shots are fine in this case: all pixel coordinates are calculated dynamically for every frame according to panPosX,Y.
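The horizontal (dual-ISO-safe) case can be sketched like this. It is an illustration of the idea, not the actual mlv_dump code; stepping two pixels keeps the interpolation on the same CFA color and avoids mixing the two exposures of a dual ISO frame:

```c
#include <stdint.h>

/* Replace one marked focus pixel with the average of its same-color
 * horizontal neighbors (two pixels left and right, staying on the same
 * Bayer color and scan line). Falls back to a single neighbor at the
 * frame edges; a frame narrower than 3 pixels is left untouched. */
void interpolate_horizontal(uint16_t *frame, int width, int x, int y)
{
    uint16_t *row = frame + (long)y * width;
    int has_l = (x - 2 >= 0);
    int has_r = (x + 2 < width);
    if (has_l && has_r)
        row[x] = (uint16_t)((row[x - 2] + row[x + 2]) / 2);
    else if (has_l)
        row[x] = row[x - 2];
    else if (has_r)
        row[x] = row[x + 2];
}
```

The around-pixel variant used for non-dual-ISO data would average vertical neighbors as well; horizontal-only is the safe choice when alternating lines carry different ISOs.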

Quote from: dfort on March 06, 2017, 07:55:27 PM
One idea that I had to integrate these map files into mlv_dump was to use graphic image files of the full raw buffers in the source code and on compile time these are converted into large arrays. That way you don't need to wrangle external data files or use a formula to create the focus pixel pattern. If a new focus pixel pattern comes up, like with the crop_rec module, you can create a new image file for it in Photoshop or Gimp. I'm not sure how difficult this would be to code--I'm still struggling writing a "Hello World!" program in C.
I really like this idea and will think about it.

The best regards
bb