Dealing with Focus Pixels in raw video

Started by dfort, October 22, 2015, 11:09:10 PM


DeafEyeJedi

Simply beautiful, and @Danne's new groundbreaking MLP rocks my socks!!!

Thanks @dfort for sharing.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

dfort

Maybe focus pixels aren't pixels at all? Getting into the nitty gritty of the image sensor and how a Bayer filter mosaic works, it occurred to me that maybe some of the green tiles are used by the "Hybrid CMOS AF" system in the cameras that exhibit the focus pixel issue. A quick Google search turned up this interesting discussion that took place a couple of years ago on an astrophotography site. Interestingly, someone there found the focus pixels across the whole sensor.

Here is that image from a Canon 650D sensor:

The discussion is also interesting: Examining the 650D/T4i "Hybrid CMOS AF" pixels

Here's the "Reader's Digest" version of that discussion:

Quote
While producing a Master Bias frame in PixInsight I noticed something interesting in the usually unremarkable pixel rejection maps that the ImageIntegration process creates. I seem to have inadvertently found the locations of all the "Hybrid CMOS AF" focus detection pixels! They showed up in the High Pixel Rejection Map.

Looking closely, I think they have done something really smart here that takes advantage of one of the "weaknesses" of Bayer matrix colour imaging.

First a full frame view. I have aggressively stretched the image to make the focus points obvious. JPEG compression made things rather blurry but you get the general idea here.

Now if we zoom WAY in (400%) we start to see interesting things.

Notice all the detection areas are hollow. Now why would that be...


Finally we get to what I think is the clever part. In all the previous images I had been working with the RAW Bayer matrix data. If we now do a de-Bayer run on this and convert it to colour we find... It's all green!

This makes a lot of sense to me. With the RGB Bayer matrix what we ACTUALLY have is RGGB. The green channel has 2x the pixels devoted to it compared to red or blue. So, if you MUST sacrifice any data at all from the sensor, exclusively using green as the sacrificial pixel set is extremely logical. Green pixels are the only ones we have "spares" of!

BTW, the de-Bayer step blurred these a bit so they look like dots, but of course, as you can see in the raw Bayer matrix data, that is an artifact of debayering and not physically true.

It is not at all. What Canon calls "Hybrid CMOS AF" was introduced on the T4i/650D and is also included on the new T5i/700D. The new SL1/100D has a variant called Hybrid CMOS AF II that seems (from marketing descriptions) to be the same thing but with focus-sense pixels across 80% of the sensor area instead of only the central area.

In all other Canon DSLRs the autofocus sensors are only found in the reflex viewfinder part of the camera. This means that focus can only be achieved when the mirror is down. The cameras with Live View can use a software method to do a slower focus metric, but it is not as fast/useful as the built-in reflex focus points and can only be done between exposures. The idea with these in-sensor focus points is that these cameras can do fast/accurate focus as well as continuously adjust focus while recording video.

(The 650D/700D/100D also have the traditional AF sensors in the reflex section in addition to the Hybrid CMOS AF method.)

Actually I was thinking "oh look, it is not as bad as we feared it might be!" in that we effectively have a 2x oversample of green pixels vs 1x red and 1x blue - this means that in the AF areas we drop down to a 1x sample of green. The loss of spatial resolution in those specific areas is still there, but given the blurring of spatial resolution from Bayer matrix reconstruction anyway, I again think "not all that bad really".

One note on the observation that all focus pixels were green: the focus pixels can show up in just about any color. The most common in Magic Lantern raw video seems to be magenta (a.k.a. pink), hence the popular term "pink dots." Of course green is a primary color, so once the surrounding red and blue values in the Bayer pattern are combined during demosaicing, the focus pixels can take on different colors.

DeafEyeJedi

Yikes, so this whole time it's been RGGB as opposed to RGB, especially for the cameras affected by the so-called "Focus Pixels" via the Hybrid CMOS AF method.


Seriously, I love Google so much. Excellent find on these articles, Daniel ... totally worth the read!!!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

dfort

I've been testing Danne's new MLP workflow and the focus pixel map file that I created for the EOSM in 1280x720 crop mode video is working very well. I created the list using some DNGs that clearly showed the focus pixels, Photoshop with the rulers set to pixels, a Google spreadsheet and lots of patience. Basically I ran the DNG through dcraw using the -D or the -d option to make a grayscale image with no interpolation, and added the -j and -t 0 options as suggested in the dcraw man page to prevent any stretching or rotation of the image.

dcraw -D -j -t 0 *.dng

This is what a potential focus pixel looks like in Photoshop. I emphasize potential because many "hot" pixels also show up, but the focus pixels are arranged in a pattern. With the EOSM the focus pixels line up horizontally 24 pixels apart from one another. There are also short rows and long rows, and once that was apparent it was easy to set up a spreadsheet to map an entire row at a time.
That particular pixel is at x=15 and y=12, so those coordinates would be entered into a dcraw-formatted ".badpixels" file along with an optional date that the pixel was mapped. The text file looks like this:

# EOSM crop mode 1280x720
# Full width pixel rows
#
23 121 0
47 121 0
71 121 0
95 121 0
119 121 0
143 121 0
167 121 0
...
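
For anyone who would rather script that step than use a spreadsheet, here is a rough, untested sketch that prints one full-width row of entries spaced 24 pixels apart. The helper name and the example start/end values are made up for illustration; only the 24-pixel spacing comes from the EOSM observation above.

#!/bin/bash
# gen_fp_row.sh - hypothetical helper, prints one row of ".badpixels" entries
# Usage: gen_fp_row.sh first_x last_x y
# example: gen_fp_row.sh 23 1271 121 >> EOSM_crop.badpixels
first_x=$1; last_x=$2; y=$3
for (( x = first_x; x <= last_x; x += 24 )); do
    printf '%d %d 0\n' "$x" "$y"   # x, y and the unused "time of death" field
done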


According to the dcraw man page:
Quote
List of your camera's dead pixels, so that dcraw can interpolate around them. Each line specifies the column, row, and UNIX time of death for one pixel. For example:

  962   91 1028350000  # died between August 1 and 4, 2002
 1285 1067          0  # don't know when this pixel died

These coordinates are before any stretching or rotation, so use dcraw -j -t 0 to locate dead pixels.

Of course looking at a text file with over 6,000 coordinates takes some abstract thinking to visualize what the pixel map actually looks like, so I wrote this short shell script that turns the pixel map into an image file.

#!/bin/bash

# view_pix_map.sh
#
# Create a Portable Bit Map file from a dcraw ".badpixels"
# or an MLP "dead_pixel_list.txt" formatted file.
#
# Requires Imagemagick, bash and sed
#
# Usage view_pix_map.sh [size] [file]
# example: view_pix_map.sh 1280x720 dead_pixel_list.txt
#
# This reads the file, ignoring any blank lines or comments or lines
# that start with -P (MLP dcraw command), extracts the xy coordinates,
# discards the "UNIX time of death for one pixel" field (required by dcraw)
# and creates a Portable Bit Map graphic file showing the location of the
# mapped focus pixels. This .pbm file can be opened with Photoshop, Gimp, etc.
# it can also be opened with a text editor.
#
# 2016 Daniel A. Fort
# This is free and unencumbered software released into the public domain.

SIZE=$1

# Start with an all-white plain text (P1) bitmap of the requested size.
convert -size "$SIZE" -colorspace gray -compress none -depth 8 xc:white "$2.pbm"

# Strip comments, "-P" lines and blank lines, then plot each x,y coordinate as a
# black point. The third field (the dcraw "time of death") is read but ignored.
# Calling mogrify once per pixel is slow, but it keeps the script simple.
sed -e 's/[[:space:]]*#.*// ; s/[[:space:]]*-P.*// ; /^[[:space:]]*$/d' "$2" |
while read -r x y t; do
  mogrify -colorspace gray -compress none -depth 8 -fill black -draw "point $x,$y" "$2.pbm"
done


Here is the EOSM focus pixel map file as an image file:

As you can see it looks quite similar to the image posted in Reply #26. In this case it is in crop mode video so it is just the center of the sensor without any line skipping. There are lots of focus pixels packed into that area!

I chose the Portable Bitmap Format because it is a very simple file format that can be read with a text editor. The mapped focus pixels are the 1's.
P1
1280 720
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
...


It should be relatively simple to write a script that creates a focus pixel map usable with dcraw from this .pbm file. In addition, the .pbm file can be edited in Photoshop, interactively adding any missed pixels and erasing any excess. The only problem is that Photoshop saves .pbm files in binary format, but they can easily be changed back to text format with Imagemagick:
mogrify -colorspace gray -compress none -depth 8 [filename]

So far I've only got the center area of the EOSM sensor mapped. I'm not sure if there is any shifting when the image size is changed in the camera, and I haven't yet determined how line skipping in non-crop video works, so there is still lots to do on this project.

Of course others have already solved the focus pixel issue, most notably AWPStar with his MLVProducer for Windows. He has shared his method, which is quite novel because he maps the coordinates from the center of the image. However, I'm not looking to write a complete application, just to make some reliable pixel map files that work with dcraw and to demystify the dots issue that affects some camera platforms.
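
For reference, converting between the two conventions is just an offset of half the frame size in each direction. A minimal, untested sketch (the frame size and the "one x y pair per line" input format are assumptions, not necessarily AWPStar's actual file layout):

#!/bin/bash
# center2topleft.sh - hypothetical converter from center-origin to top-left-origin coordinates
# Usage: center2topleft.sh width height < centered.txt > topleft.txt
awk -v w="$1" -v h="$2" '{ print $1 + int(w/2), $2 + int(h/2) }'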

DeafEyeJedi

Excellent progress, and indeed it is quite nice that @AWPStar was able to map the FP coordinates from the center as opposed to the top left corner of the image file.

I think this could simplify the entire post workflow, and we probably wouldn't have to worry about which resolution we shot in since it would just map the FP from the center on out ... if that makes sense?

Meaning we can then shoot in either crop mode or non-crop mode ... whatever suits the situation best for us while shooting on set. At least that's what I'm thinking atm.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

AWPStar

I don't know why, but sometimes pointing from the center does not work and the coordinates have to be shifted. And I still don't know how to get the right position in all cases.
Probably I should use cropPos and PanPos somehow.

And probably it is (PanPos - CropPos),
or not
MLVProducer. p.s. sorry for my bad english.

dfort

Quote from: AWPStar on January 06, 2016, 03:57:56 AM
I don't know why, but sometimes pointing from the center does not work and the coordinates have to be shifted. And I still don't know how to get the right position in all cases.
Probably I should use cropPos and PanPos somehow.

I don't know what cropPos and PanPos are, but I also found out that raw video isn't always centered in relation to the full sensor. Running exiftool on a CR2 file gives lots of useful information:
Sensor Width                    : 5280
Sensor Height                   : 3528
Sensor Left Border              : 84
Sensor Top Border               : 64
Sensor Right Border             : 5267
Sensor Bottom Border            : 3519


Unfortunately this doesn't make it into the MLV metadata, and of course RAW has no metadata at all, so we're left guessing how to center the pixel map.
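
For what it's worth, the naive guess would be to center the recorded frame on the active area reported by exiftool. Since, as noted above, the raw video is not always centered, this is only a starting point; the sketch below also assumes a 1:1 crop-mode readout with no line skipping, and the crop size is just an example.

# Centering guess using the CR2 border values above
sensor_left=84;  sensor_right=5267
sensor_top=64;   sensor_bottom=3519
crop_w=1280;     crop_h=720

active_w=$(( sensor_right - sensor_left + 1 ))
active_h=$(( sensor_bottom - sensor_top + 1 ))
echo "guessed crop origin: $(( sensor_left + (active_w - crop_w) / 2 )),$(( sensor_top + (active_h - crop_h) / 2 ))"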

Now that I can see what a dcraw ".badpixels" map looks like as an image file, I searched around, collected several "dot removal" tools and made image files of their pixel maps. Note that other tools can also remove the focus pixels, but some of them use chroma smoothing or other methods that don't require a pixel map.

Let's start with one of the earliest tools, foorgol's PinkDotRemover tool 650D:


I was surprised by how "dirty" the image appears. According to the author, he used a script to extract the positions of the dots, and it looks to me like many of the coordinates are mapping random noise. What seems strange to me is that the pattern looks more like what I've seen on a 700D than a 650D, yet the tool was made to work with the 650D, EOSM and "Crop Mode". I only found one pixel map file in the tool, so it must be doing some sort of scaling. The pixel map covers a 1280x720 area.

Next up, maxotics' unfinished FocusPixelFixer


Quite a difference. maxotics put a lot of effort into the "pink dot" issue and it appears that he has come up with a more orderly pattern. This is also 1280x720. I'm not sure if it covers both crop and non-crop, but since it seems that he did most of his research using EOSM crop mode video, I would assume that this pattern is for crop mode.

The newest program that has dot removal is MLVProducer for Windows.

AWPStar created not one but four different pixel maps. These are for two different focus pixel patterns in crop and non-crop mode. His pixel maps are different from the others because his x,y coordinates are zeroed at the center instead of the upper left corner like the others.

Let's look at his crop mode version, dot32_crop:


This is very similar to the FocusPixelFixer pattern but in a slightly larger 1792x1008 size. Note that the images I'm posting have been resized to fit the forum's guidelines. If you want to examine the full-sized images you can find them on my flickr photostream.

The non-cropped version looks quite a bit different, dot32:


Compare this to crop version number 2, which is different from the other three because I calculated it to be 1280x720 while the others are 1792x1008. It is also the closest in appearance to the one that I created for the EOSM crop mode: dot32_crop2


And the non-crop version number 2: dot32_2


Something that is interesting is the number of coordinates that were mapped.


PinkDotRemover tool 650D: 7,747
FocusPixelFixer: 10,248
MLVProducer for Windows dot32_crop: 14,081
MLVProducer for Windows dot32: 25,860
MLVProducer for Windows dot32_crop2: 4,696
MLVProducer for Windows dot32_2: 12,930

My version for the 1280x720 EOSM crop mode has 6,120 mapped coordinates. So are more mapped coordinates better? I don't think so. I believe that only confirmed focus pixels should be mapped. I didn't use a script or a formula to determine the position of each focus pixel, though I assumed that once I found a few points they would fall into a pattern. The clearest image that I found to verify my work was the one I posted in Reply #26 of this topic. I superimposed my pixel map on that image in Photoshop, and at first things didn't line up very well until I found out that the original was a "FIT" file, and those files have the 0,0 x,y origin in the lower left corner instead of the upper left corner like most graphic image files. Once I flipped the image things lined up much better. In fact I discovered that a few lines that I assumed were long rows were in fact short rows.
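
For anyone repeating that comparison without flipping the image in Photoshop, converting the coordinates themselves is just y -> (height - 1 - y). A small untested sketch for a dcraw-style list (the 720 height is only an example):

#!/bin/bash
# flip_y.sh - hypothetical helper, flips badpixel y coordinates vertically
# Usage: flip_y.sh 720 < in.badpixels > out.badpixels
awk -v h="$1" '/^[0-9]/ { print $1, h - 1 - $2, $3; next } { print }'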


Now things are starting to make sense. At least for the 100D,* 650D and EOSM I should be able to create a full frame pixel map file. I've still got to figure out how line skipping works in non-crop mode. Apparently it uses every third line, but my question is, which ones are the third lines?

*[EDIT: Closer examination shows that the SL-1/100D doesn't match the EOSM/650D.]

dmilligan

Each video frame in the MLV file gives you the position of the frame relative to the original raw buffer the frame was taken from (panPosX, panPosY). The raw buffer is always the same for each video mode; when you choose a resolution, you are just selecting the size of the rectangle from which data is copied.

The position of the crop rectangle is not necessarily always centered (in fact you can move it around even while recording, so it can even vary from frame to frame, and this is why every frame has the panPos).

Therefore what you really need to do is make one map of the entire raw buffer for each video mode. The easiest way to get the full raw buffer for a particular video mode is to use the regular silent picture function. Then when you go to process a frame, shift your full map based on panPos.

(I feel like I already mentioned this is what needed to be done)
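
In script form that boils down to subtracting the frame's panPos from the full-buffer coordinates and keeping only the points that land inside the recorded frame. A rough, untested sketch, assuming a dcraw-style ".badpixels" map of the full raw buffer (the script name is made up, and the panPos values and frame size would have to come from the MLV metadata):

#!/bin/bash
# shift_map.sh - hypothetical sketch, not part of any existing tool
# Usage: shift_map.sh panPosX panPosY width height < fullbuffer.badpixels > frame.badpixels
awk -v px="$1" -v py="$2" -v w="$3" -v h="$4" '
    /^[0-9]/ {
        x = $1 - px; y = $2 - py                  # shift into frame coordinates
        if (x >= 0 && x < w && y >= 0 && y < h)   # drop pixels outside the recorded frame
            print x, y, 0
    }'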

@AWPStar
Please consider releasing your source code. You have already been asked, and you beat around the bush offering excuses like the code is not ready and cleaned up enough to be released. This is a terrible reason. No code is ever perfect, and if you continue to use this as an excuse, you will never get to a point where you will release it. So just go ahead and do it. This is an open community and the ML project itself is open. We do the work and share it freely because we expect others to do likewise.

DeafEyeJedi

Quote from: dmilligan on January 07, 2016, 02:58:10 AM
@AWPStar
Please consider releasing your source code. You have already been asked, and you beat around the bush offering excuses like the code is not ready and cleaned up enough to be released. This is a terrible reason. No code is ever perfect, and if you continue to use this as an excuse, you will never get to a point where you will release it. So just go ahead and do it. This is an open community and the ML project itself is open. We do the work and share it freely because we expect others to do likewise.

+1 and plus it would take us all into a new world of Magic sooner rather than later!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

dfort

My network connection went down last night so I couldn't answer some of the comments right away, but the good part was that I had fewer interruptions and was able to create a full-sized focus pixel map for the EOSM and 650D--Yay!


@dmilligan - You did mention back in Reply #7

Quote
I would suggest mapping the focus pixels from a silent DNG taken in each video mode. That way the coordinates are relative to the full, entire raw buffer including OB areas. That way you can use the same map for anything taken in that video mode, and simply adjust it based on the resolution crop and horizontal and vertical offset (which are already recorded in the MLV file)

I wasn't sure I understood it back then, but it is starting to become clearer now that you brought up (panPosX, panPosY). I wasn't sure where to look for it, but mlv_dump -m -v [mlv_file] gave me this for a 1280x720 shot:

Block: VIDF
  Offset: 0x62f16260
    Size: 1616896
    Time: 43283.330000 ms
   Frame: #1027
    Crop: 336x184
     Pan: 330x181
   Space: 4064


Now how do you read the Crop and Pan?
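
While trying to answer that, it helps to see how those values behave over a whole clip; the verbose output can be filtered down to just those lines (the file name is only an example, and this assumes mlv_dump keeps printing one Crop:/Pan: pair per VIDF block as in the excerpt above):

mlv_dump -m -v M22-1530.MLV | grep -E 'Frame|Crop|Pan'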

I want to add that AWPStar pointed out his SourceForge project page a while back. He even transcoded his binary data files to ASCII so I could examine his pixel map files. All of this took place out in the open on his MLVProducer for Windows topic. AWPStar certainly isn't hiding any secrets!

dfort

Now that we've got pixel map files that can be viewed and edited in a graphics editing program like Photoshop or Gimp, the image file needs to be converted back to a dcraw ".badpixels" text file in order to be useful. This turned out to be a bit more complicated than the bash script that writes the image file, but it is working. Note that these scripts take a long time to run, but it is still much better than doing it manually.

Here's the script that will take a plain text formatted Portable Bit Map (.pbm) file and will use all the black pixels to create a dcraw formatted ".badpixels" file.

#!/bin/bash

# pbm2badpixels.sh
#
# Create a dcraw ".badpixels" formatted file from
# a Portable Bit Map file.
#
# The output file can be used with dcraw and MLP
# to remove the focus pixels from MLV and RAW video
# shot on Canon 100D, 650D, 700D and EOSM cameras.
#
# Requires bash, sed and file
#
# Usage pbm2badpixels.sh [file.pbm]
# example: pbm2badpixels.sh EOSM_dead_pixels.pbm
#
# This reads the file, ignoring any blank lines or comments or lines
# that start with P1 (PBM "magic number"), extracts the image size,
# and creates a ".badpixels" formatted text file showing the location of the
# mapped focus pixels. This .txt file can be opened with a text editor
# for further refinement.
#
# Photoshop and perhaps other image editing programs save bitmap files
# in something other than plain text files. Imagemagick can be used to
# change these files into P1 plain text files using:
#
# mogrify -colorspace gray -compress none -depth 8 [filename]
#
# 2016 Daniel A. Fort
# This is free and unencumbered software released into the public domain.

# First check that a file to process was specified by the user.

if [ -z "$1" ]; then
cat << EndOfMessage
pbm2badpixels.sh
Usage: pbm2badpixels.sh [file.pbm]

EndOfMessage
exit
fi

# Next check that the specified file is the correct format for this script.

if [[ $(file -b "$1") != "Netpbm PBM image text" ]]; then
echo "ERROR: Wrong Filetype"
exit
fi

# Output file named the same as input with the .txt file extension

output=$1".txt"

# Remove the "magic number" comments and spaces leaving just the raw pbm data
# and load that into an array.

pbm_data=($(sed -e 's/[[:space:]]*#.*// ; s/[[:space:]]*P1.*// ; /^[[:space:]]*$/d' "$1" | tr " " "\n"))

# The file starts out with the width and height of the image file.

width=${pbm_data[0]}
height=${pbm_data[1]}

echo '#' "$output" >> "$output"

echo '# pbm2badpixels.sh generated' >> "$output"
echo '# dcraw ".badpixels" format file' >> "$output"
echo '# image size =' "${width}x${height}" >> "$output"
echo >> "$output"

# Adjust for where the pixel information starts and ends.

first_pixel=2  # first pixel at upper left - 0,0 position
last_pixel=$(($width * $height + 1)) # last pixel at lower right - width,height position

# set the counter to the first pixel

i=$first_pixel

# Loop through the pixels and write out the focus pixel locations

  for (( y = 0; y < height; y++ )); do
    for (( x = 0; x < width; x++ )); do
      if [[ ${pbm_data[$i]} == 1 ]]; then
        echo -e "$x \t $y \t 0" >> "$output"
      fi
      i=$(( i + 1 ))
    done
  done


Using both of the scripts that I posted on this topic I was able to take my EOSM 1280x720 crop-mode focus pixel map file, turn it into a Portable Bit Map file, refine it in Photoshop and turn it back into a plain text file that can be used with dcraw (directly or through MLP). My original file had 6,120 focus pixels mapped, but once it was cleaned up the number of mapped pixels went down to 5,972. Not a big difference, but I'm pretty sure that this file hits all the focus pixels without hitting any areas that aren't focus pixels. I ran some test footage through it and it looks great.
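
For the record, the round trip with the two scripts posted in this topic looks like this (file names are just examples):

# 1. ".badpixels" text file -> viewable/editable bitmap (writes EOSM_crop.badpixels.pbm)
./view_pix_map.sh 1280x720 EOSM_crop.badpixels

# 2. edit the .pbm in Photoshop or Gimp, then force it back to plain text P1 format
mogrify -colorspace gray -compress none -depth 8 EOSM_crop.badpixels.pbm

# 3. bitmap -> dcraw ".badpixels" formatted text (writes EOSM_crop.badpixels.pbm.txt)
./pbm2badpixels.sh EOSM_crop.badpixels.pbm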

Of course there's lots to do. Each image size and aspect ratio will need its own file, line skipping needs to be worked out, and then there is the issue of being able to pan the image area when in crop mode. Lots to do, but at least with these scripts I'll be able to see what is going on instead of just working with lists of numbers.

Danne

This is great work, Dan. Could it be used to automate the process in MLP?


DeafEyeJedi

Hell yeah, Dan ... way to go on yet another achievement of yours!!!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

dfort

Quote from: Danne on January 09, 2016, 08:46:03 AM
This is great work, Dan. Could it be used to automate the process in MLP?

Thanks. That's what I'm aiming at. I doubt that it will ever be completely automatic, especially for the original raw video format, because it doesn't have the necessary metadata to find the crop and pan of the image. I also have yet to figure out how to read the crop and pan information in the mlv metadata. Another thing I haven't done yet is to look more closely into video that was shot without crop mode turned on.

One workflow problem I have is that the focus pixels are only removed when the dng files are converted into non-raw image files like TIFF, ProRes, etc. The focus pixels remain in the original mlv and in the dng files, and that would be a problem in a raw workflow using DaVinci Resolve or Adobe Camera Raw.

Of course the ideal solution would be to do it in camera. I mentioned that the CHDK (Canon Hack Development Kit) project was able to do it but I don't have the skills to implement their work in Magic Lantern.


Quote
The CHDK Manual bad pixel removal tool allows the removal of defective pixels from each image as it is taken. While Canon firmware will automatically fix bad pixels that were found when the camera was manufactured, this CHDK feature will also remove "hot" or "defective" pixels which are not known to the Canon firmware (e.g. pixels that became defective during the camera lifetime). This feature affects both the JPG image and RAW image.
Quote
This is not on a menu, but is a feature enabled by putting a special "badpixel" file into the /CHDK/ folder on your SD card.
Line structure of this file:
x1,y1
x2,y2
and so on
Here {xn,yn} are the coordinates of bad pixels in RAW file format
Quote
Once you have generated a file with the list of all the "bad" pixels for your camera, CHDK can remove them automatically with the [Average] or [RAWConv] option selected. CHDK looks for the files badpixel and badpixel.txt in the /CHDK folder; this is a plain text file with coordinates of the bad pixels in the raw image, with one x,y pair per line. If both files are present, pixels listed in each file will be patched. Only the first 8kb of each file will be used.

[Off] with this setting no Bad pixel removal processing takes place.
[Average] with this setting CHDK calculates the color for the bad pixel based on its four neighbor pixels with a simple average calculation and then interpolates - bad neighbor pixels will be ignored in this calculation.
[RawConv] setting means - intended for use with post processing raw converter software to remove the bad pixels later in the workflow. With this setting CHDK just sets the bad pixel to the value 0 (zero), without any other calculation or modification. Most RAW-capable apps. will detect this and apply their own algorithms. This option is ignored in DNG mode (in DNG mode bad pixels are always averaged by CHDK).


Quote from senior developer ewavr - 'You can compare bad pixel removal quality in both modes, IMO, 'RAWConv' mode is preferred, because CHDK interpolation is very unsophisticated'.

Note: With DNG 1.1 format enabled, bad pixels identified by badpixel.bin are always removed - (interpolated / averaged) by CHDK. This does not affect the 'Bad pixel removal' option, which also fixes user specified pixels.

There is also a topic on the CHDK forum discussing this but there hasn't been any activity on it since 2009 :-X
http://chdk.setepontos.com/index.php?topic=3098
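
Since both formats are plain text, a dcraw ".badpixels" list could in principle be reshaped into the CHDK badpixel layout with something like the untested one-liner below (file names are examples). Note, though, that CHDK only reads the first 8kb of the file, which roughly 6,000 focus pixels would far exceed, so this is more of a curiosity than a workflow.

# dcraw "x y time" lines -> CHDK "x,y" lines (comments and blank lines dropped)
awk '/^[0-9]/ { print $1 "," $2 }' EOSM_crop.badpixels > badpixel.txt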

DeafEyeJedi

Dude, I knew it ... I remember this discussion re: CHDK and felt that it would one day somehow be implemented into our toys by ML, and this is starting to feel more like reality than fantasy, at least to me.

I'm with RAWConv mode all the way baby!!!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

dmilligan

Quote from: dfort on January 09, 2016, 06:53:30 PM
Of course the ideal solution would be to do it in camera. I mentioned that the CHDK (Canon Hack Development Kit) project was able to do it but I don't have the skills to implement their work in Magic Lantern.
Fixing a couple hundred bad pixels on one still frame and fixing several thousand, 24/30 times per second, are two very different tasks. It seems unlikely to me that the ARM CPU could handle it. There might be some hardware processing somewhere that does it, but that would require extensive reverse engineering to find.

dfort

Quote from: dmilligan on January 10, 2016, 04:01:02 AM
Fixing a couple hundred bad pixels on one still frame and fixing several thousand, 24/30 times per second, are two very different tasks. It seems unlikely to me that the ARM CPU could handle it. There might be some hardware processing somewhere that does it, but that would require extensive reverse engineering to find.

More on the order of 6,000 pixels. Yeah, I only mention the CHDK project because it seems they found a way to access the "badpixel" removal system in some of the point-and-shoot cameras. There must be something already in the Canon firmware that deals with it, because the CR2, JPEG and H.264 files don't show the focus pixels. They don't show up on the Live View screen either. I also hooked up an external monitor via HDMI and didn't see any dots. They only show up in the raw or mlv files.

In any case, I've heard the mantra--if it can be easily done in post there's no point doing it in the camera. Of course "easily done" is a rather subjective statement. I'm looking at these lines of code in mlv_dump trying to figure out how to work with crop and pan:

    /* restore VIDF header */
    mlv_vidf_hdr_t *hdr = slots[capture_slot].ptr;
    mlv_set_type((mlv_hdr_t *)hdr, "VIDF");
    mlv_set_timestamp((mlv_hdr_t *)hdr, mlv_start_timestamp);
   
    hdr->blockSize = slots[capture_slot].blockSize;
    hdr->frameSpace = slots[capture_slot].frameSpace;
    /* frame number in file is off by one. nobody needs to know we skipped the first frame */
    hdr->frameNumber = frame_count - 1;
    hdr->cropPosX = (skip_x + 7) & ~7;
    hdr->cropPosY = (skip_y + 7) & ~7;
    hdr->panPosX = skip_x;
    hdr->panPosY = skip_y;
   
    void* ptr = (void*)((int32_t)hdr + sizeof(mlv_vidf_hdr_t) + hdr->frameSpace);
    void* fullSizeBuffer = fullsize_buffers[(fullsize_buffer_pos+1) % 2];


:(
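
Reading that snippet, panPosX/panPosY appear to be the raw skip offsets and cropPosX/cropPosY the same offsets rounded up to a multiple of 8, at least as far as mlv_dump's own output is concerned. The 1280x720 numbers from earlier in this thread are consistent with that:

# (panPos + 7) & ~7 reproduces the Crop values that mlv_dump reported
echo $(( (330 + 7) & ~7 ))   # 336, matches "Crop: 336x184" for "Pan: 330x181"
echo $(( (181 + 7) & ~7 ))   # 184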

dfort

Lots of interesting things are going on in other parts of the Magic Lantern project, but I'm still looking at dots and feeling confused. I hope the ML community doesn't mind me posting these lab notes, and please jump in if you feel you have something to contribute.

It is interesting having access to the 5D3 and checking out the differences between its ML menus and those on the EOSM. They are kind of polar opposite cameras but also similar in many ways.

So on most cameras, including the 5D3, the way you get into movie crop mode is by using the magnifying glass button, but the EOSM, 650D and 700D have a CROP_MODE_HACK feature that will do a digital zoom, which apparently is a standard feature on the 600D. This "hack" does a digital zoom for movie mode but not for stills. In 1920x1080 H.264 it is a 3x crop, but in mlv and raw it depends on the image size selected. The EOSM maxes out at 1792 horizontal resolution in crop mode, and that pretty much matches the 5x crop that you get with the magnifying glass zoom. Now here's the interesting part: you can put the EOSM into crop mode video using the magnifying glass button on the touch screen. In fact you can go even further and do a 10x crop with the magnifying glass method, and you can record video at that setting. Will it actually give you a 10x digital zoom? Nope, it is recorded as a 5x crop.

I also shot silent picture stills. I tried zooming in and panning around the image, shooting stills with both the full resolution and "Simple" settings. In full resolution, although it seemed like I was shooting stills in crop mode, the mlv wasn't cropped. With the "Simple" setting the panned shots were garbled. In any case, shooting digital zoom stills doesn't seem to be a supported workflow, so I'll skip that for now.

If the CROP_MODE_HACK centers the image then things will be much easier, but according to dmilligan and AWPStar the crop and pan information is in the mlv file, so I decided to take a good look at it.

As I was doing these tests I found that there are limits, and then there are limits. What I mean is that you can only shoot a maximum of 1792 wide by 1026 high; it just won't go beyond that. Well, you might say that's a pretty good resolution, but if you take it to the limit you can only record a second or so of video, basically whatever fits in the buffer, because it can't write to the SD card fast enough. I tried mlv both with and without sound, and raw, and the practical limit seems to be 1280x720 for crop mode video. If you really want to push it you can get a few seconds at 1792x1008.

While on the subject of width and height, there are lots of aspect ratios to choose from ranging from 5:1 (ridiculously wide) to 1:2 (cell phone vertical like) but at the extremes only the smaller image sizes are practical because you quickly hit the width or height resolution limits. Play around with the settings and you'll see what I mean.

Let's look at some test results. The goal is to come up with a master focus pixel map file and adjust it according to the crop and pan information that is embedded in every mlv frame. That means that it should be possible to deal with the focus pixels even if the crop area is panned in the middle of a shot.

First, let's take a look at the various image sizes available for the 16x9 aspect ratio:

640x360
    Crop: 656x368
     Pan: 650x361

960x540
    Crop: 496x272
     Pan: 490x271

1280x720
    Crop: 336x184
     Pan: 330x181

1600x900
    Crop: 176x96
     Pan: 170x91

1792x1008
    Crop: 80x40
     Pan: 74x37
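
One quick way to look at those numbers: if the recorded frame is centered in the raw buffer, then panPos*2 plus the recorded size should come out the same for every mode. A throwaway check of the values listed above:

# width height panX panY, copied from the 16x9 crop-mode tests above
while read w h px py; do
    echo "${w}x${h} -> buffer guess: $(( 2*px + w ))x$(( 2*py + h ))"
done << 'EOF'
640 360 650 361
960 540 490 271
1280 720 330 181
1600 900 170 91
1792 1008 74 37
EOF

Every mode comes out to the same 1940x1082, which at least hints that these crop-mode recordings share one buffer and sit centered in it.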


The crop and pan should be in relation to the full buffer. I'm not sure if I have been successful in determining the size of the "entire raw buffer including OB areas", but I did shoot a full resolution silent mlv and dng and came up with some interesting information.

DNG - Photoshop = 5208x3477

DNG - exiftool:
Image Width                     : 5280
Image Height                    : 3529
Default Crop Origin             : 0 0
Default Crop Size               : 5208 3477
Active Area                     : 52 72 3529 5280

mlv_dump:
    Crop: 0x0
     Pan: 0x0


I also shot a silent still in "Simple" mode and came up with a very different resolution:
DNG - Photoshop = 1734x693

DNG - exiftool:
Image Width                     : 1808
Image Height                    : 727
Default Crop Origin             : 0 0
Default Crop Size               : 1734 693
Active Area                     : 28 74 721 1808

mlv_dump:
    Crop: 0x0
     Pan: 0x0


Another test I ran was to pan all around to see how the pan numbers change, but to my surprise the crop information changed as well. This is using the 1280x720 image size, starting at the center and then going clockwise from the top:

5x Zoom crop center
    Crop: 592x336
     Pan: 585x335

5x Zoom crop top
    Crop: 592x376
     Pan: 585x375

5x Zoom crop top right
    Crop: 968x376
     Pan: 961x375

5x Zoom crop right
    Crop: 968x336
     Pan: 961x335

5x Zoom crop bottom right
    Crop: 968x296
     Pan: 961x293

5x Zoom crop bottom
    Crop: 592x296
     Pan: 585x293

5x Zoom crop bottom left
    Crop: 216x296
     Pan: 209x293

5x Zoom crop left
    Crop: 216x336
     Pan: 209x335

5x Zoom crop top left
    Crop: 216x376
     Pan: 209x375


There is a pattern emerging but I'm still trying to figure out how to use this information.

According to @dmilligan:

Quote
I would suggest mapping the focus pixels from a silent DNG taken in each video mode. That way the coordinates are relative to the full, entire raw buffer including OB areas. That way you can use the same map for anything taken in that video mode, and simply adjust it based on the resolution crop and horizontal and vertical offset (which are already recorded in the MLV file)

In the process of running today's tests I found out that a full resolution silent picture doesn't show the focus pixels. A "Simple" silent picture does, but it skips lines, so many of the focus pixels seen in crop mode video are missing because crop mode video doesn't skip any lines.

One final test--a CR2 file has even more metadata than MLV so maybe that shows more accurate information about the sensor size and possibly the raw buffer?

CR2

Photoshop = 5184x3456

exiftool

Image Width                     : 5184
Image Height                    : 3456
Canon Image Width               : 5184
Canon Image Height              : 3456
AF Image Width                  : 5184
AF Image Height                 : 3456
AF Area Widths                  : 516 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
AF Area Heights                 : 688 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
AF Area X Positions             : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
AF Area Y Positions             : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Aspect Ratio                    : 3:2
Cropped Image Width             : 5184
Cropped Image Height            : 3456
Cropped Image Left              : 0
Cropped Image Top               : 0
Sensor Width                    : 5280
Sensor Height                   : 3528
Sensor Left Border              : 84
Sensor Top Border               : 64
Sensor Right Border             : 5267
Sensor Bottom Border            : 3519


That's about enough information for today. I did shoot tests using all of the possible mlv aspect ratios and resolutions. Then of course there is raw which has even more resolution choices. MLV has 10 possible resolution choices for each aspect ratio while raw has 21 choices. That's 18 aspect ratio choices x 21 resolution choices = 378 per camera model. Making a focus pixel map for every possible aspect ratio, resolution and camera combination would be madness!

dfort

One more observation from today's test.

The mlv that was shot at 1280x720 using the magnifying glass crop mode method yielded a different crop and pan result from the crop mode video hack method. Perhaps the raw buffer is a different size between these two methods?

CROP_MODE_HACK 1280x720
    Crop: 336x184
     Pan: 330x181

5x Zoom crop center 1280x720
    Crop: 592x336
     Pan: 585x335