I still use my scripts even though they are slower than molasses in January partially because they are the Devil I know. Ok--enough with the idioms.
I did a little experiment on those specks that are showing up on the @theBilalFakhouri test files, and it might be the beginning of a feature that developers may want to add to their MLV processing applications. I was going to hold off publishing it until I could optimize the speed of my scripts a bit more, but maybe I should just dump the scripts and see if I can get fpmutil to do the tricks that the scripts can do.
We determined that the specks on the footage were not focus pixels but possibly dead/hot/cold/stuck whatever pixels or maybe just dust on the sensor. What if this happened on a shot that we want to use? Maybe there was a speck on the sensor that ruined an entire day's shoot? What if our camera has a few dead pixels? dcraw has a feature to deal with problem pixels. When I started this project I made dcraw "badpixels" files to map out the focus pixels and it worked quite well.
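In case anyone hasn't played with that feature, a dcraw "badpixels" file is just plain text with one pixel per line -- column, row, and the UNIX time the pixel went bad (0 if unknown) -- and you point dcraw at it with the -P option. Roughly like this, with made-up coordinates and a made-up filename:
310 290 0
667 262 0
dcraw -P badpixels.txt IMG_1234.CR2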
So how about adding the locations of these pixels to the map file? This can be easily done manually but what if you have a bunch of spots to remove and you're more of a visual person--like me, I studied photography not computer science.
This is what I used to spot my photographs:

Kidding aside, one of my college classmates was
Russell Brown, a Photoshop pioneer, and I've managed to keep up with technology. Enough of the name-dropping and on with the spotting exercise.
Here is one of the spots that the focus pixel map file didn't remove:

This isn't a Photoshop tutorial so I won't go into all of the details but I made a new layer and used the pencil tool set to one pixel to spot out that problem pixel.

After hitting all the problem areas I made the background white. Here's a closeup of one of the spots, notice the x,y coordinates in Photoshop:

If we add these coordinates to the appropriate focus pixel map file in MLVFS it won't work, because the map file covers the full raw buffer and the image we used is cropped out of that buffer. It is easy enough to find out how much to offset the x,y coordinates with mlv_dump using the "-v" option. Tip: pipe it into less so you won't have to scroll back through the entire output.
mlv_dump -v 1736x1152.MLV | less
...
Block: RAWI
...
Res: 1736x1152
raw_info:
...
height 1189
width 1808
...
Block: IDNT
...
Camera Name: 'Canon EOS 700D'
...
Camera Model: 0x80000326
...
Block: VIDF
...
Pan: 72x30
...
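If you don't feel like paging through all of that, a quick grep pulls out just the fields we care about. Not bulletproof -- the patterns may catch a few extra lines -- but it beats scrolling:
mlv_dump -v 1736x1152.MLV | grep -E 'Res:|height|width|Camera Model|Pan:'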
There's all the information we need. MLVFS will use the map file named "80000326_1808x1189.fpm" and we need to offset our coordinates 72x30. Since I'm more comfortable with Photoshop than basic arithmetic --
Change the canvas size to the full raw buffer size:

and offset the layer we created to "Pan: 72x30":

So that same pixel we just looked at is now at the coordinates based on the full raw buffer:

Fill in the edges with white and save the file. The pbm2fpm.sh script needs a plain text pbm file, which, surprise, Photoshop doesn't do, but if ImageMagick is installed the script can work with pretty much any image file format (more on that conversion below). Here is a section of the output, note the pixel at 310x290:
667 262
245 263
643 276
310 290
1744 302
1724 314
1706 556
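By the way, the ImageMagick step the script leans on boils down to something like this -- the "-compress none" part is what forces the plain text (P1) flavor of pbm. The filenames are made up and I'm glossing over the exact options pbm2fpm.sh uses:
convert fixed_pixels.tif -compress none fixed_pixels_plain.pbm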
Like I said, the script is slower than... well, let's just say very slow, so I tried fpmutil to convert it instead. This is where I had a problem:
./fpmutil fixed_pixels.pbm -o fixed_pixels.fpm
Focus Pixel Map Utility v0.5
****************************
Error: 'fixed_pixels.pbm' header corrupted, map can not be converted
This was a binary (a.k.a. P4 or non-text) Portable Bit Map file exported from Photoshop. Apparently fpmutil requires a specific header in the pbm file. Speaking of headers, I'm not using any in my fpm files and my scripts strip out any headers. I don't have anything against the use of headers, but if the file doesn't have a header we should probably be taking the information from the filename, like MLVFS and fpm2pbm.sh are doing.
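To spell out what "taking it from the filename" means: everything needed is already baked into the name -- "80000326_1808x1189.fpm" is camera model 0x80000326 at 1808x1189 -- and a few lines of shell are enough to pull it apart. A rough sketch of the idea, not copied from fpm2pbm.sh:
fpm=80000326_1808x1189.fpm
name=${fpm%.fpm}      # strip the extension
model=${name%%_*}     # camera model -> 80000326
size=${name##*_}      # resolution   -> 1808x1189
width=${size%x*}
height=${size#*x}
echo "model=$model width=$width height=$height"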
Back to the spotting exercise. Add the list of fixes to the appropriate map file, "80000326_1808x1189.fpm" in this case and save it in MLVFS.
Before:

After:

Sure, I could have done it like this: (x + Pan_X) (y + Pan_Y) = (238 + 72) (260 + 30) = 310 290, and just added that to the map file with the same result, but imagine adding a feature to an MLV processing app where you can spot out specks as easily as using Spotone.
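And if the Photoshop round trip feels like overkill for a handful of pixels, the same offset is a couple of lines of shell. A rough sketch, where coords.txt is just a made-up name for a file of cropped-frame x y pairs:
pan_x=72; pan_y=30                        # from the VIDF block
while read -r x y; do
  echo "$((x + pan_x)) $((y + pan_y))"    # shift into full raw buffer coordinates
done < coords.txt >> 80000326_1808x1189.fpm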
Another idea is to add a feature in MLVFS where the problem pixels for your camera can be saved in a map file. Maybe using the camera's serial number in the map file name? That way the "dead" pixels will be fixed automatically, sort of the way it was designed in
dcraw.
Yes I know it exports P4 (binary) PBMs, but if you need I can add P1 (ascii) support (have not seen a real reason for adding this before). Are you directly editing these ascii pbms in a text editor?
No real need to support Plain PBM. According to
the file specifications:
Plain PBM actually came first, but even its inventor couldn't stand its recklessly squanderous use of resources after a while and switched to what we now know as the regular PBM format.
I'm going to continue to support Plain PBM in my scripts unless I can figure out how to write binary files in bash or learn C enough to modify fpmutil to do what I want. Then again maybe I'll just stick with shell scripts and say I'm a die hard follower of the
Unix Philosophy where all data should be text streams and
worse is better.
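For what it's worth, bash's printf can emit raw bytes, so a binary P4 writer isn't completely out of reach. A minimal sketch of just the header plus a couple of made-up raster bytes, not something my scripts actually do:
printf 'P4\n%d %d\n' 1808 1189 > binary.pbm   # P4 header: magic number, width, height
printf '\x80\x01' >> binary.pbm               # packed bitmap bytes would follow, 8 pixels per byte
Packing the bitmap rows is where it gets ugly, which is part of why I haven't bothered.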
Edit: I could add this fpm header support to MLVFS itself... The purpose of this FPM header is the need of knowing the exact resolution for the backward conversion to PBM, and 'fpmutil' gets this information from my humble header of those fpms.
I like the idea of having a header because it could eliminate duplicate map files. I would suggest having fpmutil fall back on the filename if the FPM header is missing.
@dmilligan and I had a philosophical discussion about having
duplicate fpm files and code to generate
map files on the fly. Maybe it is time to re-think this now that we've got 32 map files, most of them with duplicate data, taking up nearly 30 MB of space.