Show Posts


Messages - a1ex

Pages: [1] 2 3 ... 357
Raw Video Postprocessing / Re: Strong artifacts
« on: Yesterday at 10:35:11 PM »
Unable to answer without sitting down to debug it, but from what I remember from using AMaZE in cr2hdr, it seemed to prefer white-balanced data as input (so it could tell what's grayscale and handle aliasing better in those areas). At least that was my understanding (or maybe my implementation is also broken). I don't really understand how it works internally (I tried to ask somebody who seemed to know); I only figured out how to call it.
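As a rough illustration of what "white balanced data as input" means here - a minimal sketch under my own assumptions (hypothetical RGGB gains, not cr2hdr's actual code):

```python
import numpy as np

def white_balance_bayer(bayer, r_gain=2.0, b_gain=1.5):
    """Scale the R and B sites of an RGGB Bayer mosaic before demosaicing,
    so gray areas come out with roughly equal values in all channels."""
    out = bayer.astype(float)
    out[0::2, 0::2] *= r_gain   # R sites (even rows, even columns)
    out[1::2, 1::2] *= b_gain   # B sites (odd rows, odd columns)
    return out
```

With the mosaic pre-scaled like this, a demosaicer can assume gray patches have equal channel values, which is what helps it suppress aliasing there.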

Very easy:
Code: [Select]
# similar to %-n.png and %-p.png, but using frames i+2, i-2, i+3 and i-3
%-n2.png: %.ppm $$(call inc,%,2).ppm
	$(FLOW) $^ $@

%-p2.png: %.ppm $$(call inc,%,-2).ppm
	$(FLOW) $^ $@

%-n3.png: %.ppm $$(call inc,%,3).ppm
	$(FLOW) $^ $@

%-p3.png: %.ppm $$(call inc,%,-3).ppm
	$(FLOW) $^ $@

# average current frame with 3 warped frames before and 3 after
%-a33.png: %.ppm %-n.png %-p.png %-n2.png %-p2.png %-n3.png %-p3.png
	convert -average $^ $@

=> make whatever_001234-a33.png (or jpg, since there is a rule that converts from png to jpg)

convert foobar.tif foobar.ppm

I had some trouble saving 16-bit TIFF from OpenCV, but 16-bit PNG appears to work fine (convert it to TIFF afterwards). For 8-bit output, just change the extensions in the Makefile; for 16-bit output, it's something like cv2.imwrite(out, im2.astype('uint16')); it might also require scaling.
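For the 16-bit path, a minimal sketch of the scaling step (assuming float data in [0, 1], as the pyflow demo uses; the cv2.imwrite call in the comment is the same one mentioned above):

```python
import numpy as np

def to_uint16(im):
    """Scale a float image in [0, 1] to the full 16-bit range for PNG output."""
    return np.clip(im * 65535.0, 0, 65535).astype(np.uint16)

# then, with OpenCV:
# import cv2
# cv2.imwrite("out.png", to_uint16(im2W[:, :, ::-1]))   # cv2 expects BGR order
```

Without the scaling, a [0, 1] float image saved as uint16 would come out almost black, which is probably the "might also require scaling" part.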

The default output format is 8-bit PNG, btw (and the default input is 8-bit PPM because that's what dcraw outputs by default). I've used JPEG just for uploading smaller files.

Also noticed LMMSE already handles color artifacts pretty well, so I've rendered the frames with RawTherapee and applied the same script. This is not fully automated - after batch processing with RT or any other external program, convert the images to ppm, delete the dcraw rule from the Makefile and run the remaining stuff. Result:

Raw Video Postprocessing / Re: Strong artifacts
« on: Yesterday at 04:42:14 PM »
Thanks; using the previous and next frame gives some minor improvement, but nothing worth showing.

However, I'm not getting your artifacts when rendering the image normally (dcraw -a -b 2), and I'm not even using an alias-friendly demosaic algorithm.

Before (original, uncorrected): M18-1635_000003-dcraw.jpg
Corrected with 1 frame before/after: M18-1635_000003-dcraw-a11.jpg
Corrected with 3 frames before/after: M18-1635_000003-dcraw-a33.jpg

edit: also tried dcraw-float; interpolation type 6 (LMMSE) does a good job reducing color artifacts without any additional trickery; I'd expect the same performance for any GUI raw processor that uses LMMSE (e.g. RawTherapee).

Same example with RawTherapee, LMMSE:

Before (original, uncorrected): M18-1635_000003-RT-LMMSE.jpg
Corrected with 1 frame before/after: M18-1635_000003-RT-LMMSE-a11.jpg
Corrected with 3 frames before/after: M18-1635_000003-RT-LMMSE-a33.jpg

When you install opencv with brew, it will tell you to run some commands if you want to develop with it (or something along these lines). Once you do that, it should be able to find cv2 (aka OpenCV). At least that's how it worked here.

I've used a modified version of this for vertical pattern noise removal (used more neighbouring frames, reduced the weight of frames without camera motion and only kept the vertical offsets from the correction). Regular noise reduction also works (advice: limit the correction to +/- some small quantity, e.g. 5 units on a 0-255 scale).

For more fun stuff, look at (watch the video on the front page).

Just got another sample file for testing; result:

dcraw -a -b 2:
Before (original, uncorrected): M18-1635_000003-dcraw.jpg
Corrected with 1 frame before/after: M18-1635_000003-dcraw-a11.jpg
Corrected with 3 frames before/after: M18-1635_000003-dcraw-a33.jpg

Same example with RawTherapee, LMMSE:
Before (original, uncorrected): M18-1635_000003-RT-LMMSE.jpg
Corrected with 1 frame before/after: M18-1635_000003-RT-LMMSE-a11.jpg
Corrected with 3 frames before/after: M18-1635_000003-RT-LMMSE-a33.jpg

copying code doesn't work well from Safari

Maybe you can report a bug to Apple after you get it working :D

edit: Google Chrome has the same issue, Firefox works.

edit: wget also has the issue, what the...

You may need to install pip first. There are two ways to do this on Mac (or maybe more):

- sudo easy_install pip (then: sudo pip install Cython)
- without sudo: use pip2 after installing homebrew and hg. After running the commands suggested by homebrew: pip2 install Cython; python setup.py build_ext -i

You'll also need PIL (sudo pip install Pillow / pip2 install Pillow), OpenCV (brew install opencv3), dcraw and ImageMagick (brew install dcraw imagemagick). Also requires some fiddling (e.g. copying code doesn't work well from Safari, python modules don't work out of the box - read brew's messages - and so on).

Tested on a Mac VM after installing this. The hardest part is getting used to the Mac UI (e.g. how do you open the Home directory in the file browser? how do you select all the files from a zip in order to drag them?!)

Tip: only experiment with one frame at a time (it's not very fast).

Raw Video Postprocessing / Re: Strong artifacts
« on: November 19, 2017, 08:13:36 PM »
Looks pretty strong; do you mind uploading a short MLV (1 second is more than enough) for this experiment?

Do you mean you want to try some super resolution algorithms?

First attempt seems promising:

Just playing with this dataset and

Do you mean you want to try some super resolution algorithms?
I have about 45 frames of this castle, before I start panning to the right...
Uploading frame 0 to 45 right now, takes about half an hour.
Same link as before.

- before: dcraw M27-1337_frame_000002.dng
- after: averaged with frames 1 and 3, warped with optical flow to match frame 2

To reproduce the above result, get the files below, install the dependencies (follow comments and error messages), then type:
Code: [Select]
make -j2 M27-1337_frame_000002-a.jpg

or "make -j8" to render the entire sequence on a quad-core processor.

Makefile (use Firefox for copying the text; Google Chrome and Safari will not work!)
Code: [Select]
# experiment: attempt to reduce aliasing on hand-held footage using optical flow
# requires

# replace with path to pyflow repository
FLOW=python ~/src/pyflow/

# default target: render all frames as jpg
all: $(patsubst %.dng,%-a.jpg,$(wildcard M27-1337_frame_*.dng))

# render DNGs with dcraw
%.ppm: %.dng
	dcraw $<

# helper to specify dependencies on previous or next image
# assumes the file name pattern is: prefix_000123 (underscore followed by 6 digits)
# fixme: easier way to... increment a number in Makefile?!
inc = $(shell stem="$1"; echo $${stem%_*}_$$(printf "%06d" $$((10\#$${stem//*_/}+$2))) )

# enable secondary expansion (needed below)
.SECONDEXPANSION:

# next or previous frames
%-n.png: %.ppm $$(call inc,%,1).ppm
	$(FLOW) $^ $@

%-p.png: %.ppm $$(call inc,%,-1).ppm
	$(FLOW) $^ $@

# average
%-a.png: %.ppm %-n.png %-p.png
	convert -average $^ $@

# fallback rules: first / last file will only have "next" / "previous" neighbours
# FIXME: these rules may be chosen incorrectly instead of the above in some edge cases; if in doubt, delete them and see if it helps
%-a.png: %.ppm %-n.png
	convert -average $^ $@

%-a.png: %.ppm %-p.png
	convert -average $^ $@

# convert to jpg
%.jpg: %.ppm
	convert $< $@
%.jpg: %.png
	convert $< $@

# 100% crops
%-crop.jpg: %.jpg
	convert $< -crop 400x300+900+650 $@

# careful if you have other files in this directory ;)
clean:
	rm -f *.ppm *.jpg *.png *.npy

# do not delete intermediate files
.SECONDARY:

# example:
# make -j8
# make -j2 M27-1337_frame_000002-a.jpg
Code: [Select]
# Modified the demo from
# -> just save the warped image and the computed flow; filenames from command line

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# from __future__ import unicode_literals
import numpy as np
from PIL import Image
import time
import pyflow
import sys

if len(sys.argv) == 4:
    print("%s %s -> %s" % (sys.argv[1], sys.argv[2], sys.argv[3]))
else:
    print("usage: %s input1.jpg input2.jpg output.npy" % sys.argv[0])
    raise SystemExit

im1 = np.array(Image.open(sys.argv[1]))
im2 = np.array(Image.open(sys.argv[2]))
im1 = im1.astype(float) / 255.
im2 = im2.astype(float) / 255.

# Flow Options:
alpha = 0.012
ratio = 0.75
minWidth = 20
nOuterFPIterations = 7
nInnerFPIterations = 1
nSORIterations = 30
colType = 0  # 0 or default:RGB, 1:GRAY (but pass gray image with shape (h,w,1))

s = time.time()
u, v, im2W = pyflow.coarse2fine_flow(
    im1, im2, alpha, ratio, minWidth, nOuterFPIterations, nInnerFPIterations,
    nSORIterations, colType)
e = time.time()
print('Time Taken: %.2f seconds for image of size (%d, %d, %d)' % (
    e - s, im1.shape[0], im1.shape[1], im1.shape[2]))

flow = np.concatenate((u[..., None], v[..., None]), axis=2)
np.save(sys.argv[3] + ".npy", flow)

import cv2
cv2.imwrite(sys.argv[3], im2W[:, :, ::-1] * 255)

Exercise for the reader: use more frames to compute the correction.

Have fun.
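Side note: the `inc` helper in the Makefile above boils down to shell arithmetic; a stand-alone bash sketch of the same logic (using `##*_` instead of the Makefile's `//*_/` substitution - same effect for these file names):

```shell
stem="M27-1337_frame_000002"
# strip everything after the last underscore, then re-append the
# zero-padded, incremented frame number (10# forces base 10, so
# leading zeros are not parsed as octal)
next=$(printf "%s_%06d" "${stem%_*}" $((10#${stem##*_} + 1)))
echo "$next"   # M27-1337_frame_000003
```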

Got some half-decent results with this:

For each frame:
- selected a few nearby frames (choice not critical, e.g. i-3 ... i+3 is fine, but some difficult frames will need more)
- computed a warped version from the nearby frames, using their demo script (only changed the file names)
- averaged the current frame and its warped versions => a temporally denoised frame
- from the difference image (denoised - original), kept only the vertical pattern (column medians) and discarded the rest

Seems to work as long as the camera is moved horizontally (otherwise it's hard to tell the difference between pattern noise and vertical objects in the image). When the camera motion stops, the pattern reappears (need to pick different frames).
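The steps above can be sketched like this - my own minimal reconstruction, not the actual script, assuming grayscale frames on a 0-255 scale:

```python
import numpy as np

def remove_vertical_pattern(original, denoised, limit=5):
    """Keep only the per-column (vertical pattern) part of the temporal
    denoise correction, clamped to +/- `limit` units on a 0-255 scale."""
    diff = denoised.astype(float) - original.astype(float)
    col_offsets = np.median(diff, axis=0)            # one offset per column
    col_offsets = np.clip(col_offsets, -limit, limit)
    # apply only the vertical component; discard the rest of the difference
    return np.clip(original + col_offsets[np.newaxis, :], 0, 255)
```

The column medians capture the pattern (constant along each column) while mostly rejecting scene detail, which is why horizontal camera motion helps: it decorrelates vertical objects from the fixed pattern.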

Can you upload a few frames before and after your test image, graded in the same way? 15 JPEGs before and 15 after should be fine, just to run a quick test and have something to compare with. I've got this, but it's hard to judge when they don't have the same grading.

Hm, the first frame from the MLV is dark - how did you get that?!

H.264 proxy getting slightly out of sync?

edit: no improvements using this dark frame...

Camera-specific discussion / Re: Canon 5D Mark III / 5D3 / Firmware 1.3.4
« on: November 18, 2017, 03:13:39 PM »
Right - this time, the focus distance figures look sane. For infinity, Canon firmware returns 65535 cm; maybe this can be adjusted in Lua somehow, but that applies to all models, not just 5D3 1.3.4.

There may be (didn't test) a similar bug in another widely used feature; it can be noticed with grep.

Camera-specific discussion / Re: Canon 7D Mark I
« on: November 18, 2017, 03:06:22 PM »
Not sure what might be causing the issue, but mlv_rec's file I/O backend is pretty different from mlv_lite's; maybe g3gg0 has some ideas.

There is work in progress for making mlv_lite compatible with mlv_snd.

The image is underexposed by more than 3 stops.

Based on the measurements below, my advice would be to use a slightly higher exposure time (maybe 1/33 - that one shouldn't cause trouble with artificial lights, just like 1/50) and increase ISO to 3200 or even 6400. On 5D3, the benefits of increasing ISO are much higher in regular 1080p or 720p video modes, compared to photo mode or 1:1 crop, because of... pixel binning.

1080p/720p: by increasing ISO from 3200 to 6400, you lose 1 stop of highlights to gain about 0.5 stops in shadows. Above ISO 6400, there's nothing more to gain.

1:1 crop / photo mode: by increasing ISO from 1600 to 6400, you lose 2 stops of highlights to gain about 0.5 stops (0.36+0.24) in shadows. You'll probably think twice before increasing ISO above 1600 or 3200.

Code: [Select]
# compute shadow improvement when increasing ISO (100->200, 200->400, ..., 6400->12800)
# in other words: how much cleaner your shadows will be after increasing ISO by 1 stop (thus clipping 1 stop of highlights)
# measurements done on 5D3,
#         isos    =   [ 100    200    400    800    1600   3200   6400  12800 ];
octave:1> dr_1080 =   [ 11.10  11.05  11.02  10.93  10.73  10.39  9.88  8.88  ];
octave:2> dr_crop =   [ 11.22  11.13  11.00  10.82  10.32   9.68  8.92  7.70  ];
octave:3> 1 + diff(dr_1080)
                           0.950  0.970  0.910  0.800  0.660  0.490  0.00
octave:4> 1 + diff(dr_crop)
                           0.910  0.870  0.820  0.500  0.360  0.240 -0.22
# note: output aligned manually for easier reading

In your case: 1/50 -> 1/33 means more photons captured (0.6 EV more), and ISO 800 -> 6400 will improve the shadows by about 2 stops (0.8+0.66+0.5), while clipping 3 stops of highlights. In your test image, clipping 3 + 0.6 EV of highlights would affect a grand total of... 72 pixels (0.0035% of the image area).

Code: [Select]
dcraw -4 -E 1Z7A2572_000422.dng
octave:1> a = imread('1Z7A2572_000422.pgm');
octave:2> prctile(a(:), [50 90 99 99.9 99.99 99.999 100])'
ans =
   2094.0   2184.0   2306.0   2402.0   3037.0   3335.5   3493.0

# overexposure amount (using default white level from MLV metadata)
octave:3> log2(16200-2048) - log2(3493-2048)
ans =  3.2919

# how many pixels would be clipped after increasing the exposure by 3.6 stops?
octave:4> 2 ^ (log2(16200-2048) - 3.6) + 2048
ans =  3215.1
octave:5> sum(a(:) > 3215)
ans =  72

The algorithm used for fixed pattern noise has a major weakness: it gets easily confused by horizontal or vertical lines in the actual image. I have some ideas to rework the algorithm, by looking at past and future frames and applying some sort of optical flow, but it's not easy.

A dark frame may improve things, but my previous attempt in a similar case was unsuccessful (because the pattern is not exactly repeatable with a dark frame - instead, my understanding is that Canon code fine-tunes it continuously, with a time constant of a few seconds - see e.g. the vertical artifacts in centered x10 zoom with crop_rec_4k builds).

I'd like to take a closer look at this pattern change, but cannot promise a timely fix. Do you mind uploading a sample MLV?

Camera-specific discussion / Re: Canon 7D Mark I
« on: November 17, 2017, 09:11:38 PM »
To narrow down, you can run a couple of tests: with a card formatted in the camera, record as much as you can, with:
- mlv_rec with sound (your current configuration that fails)
- mlv_rec without sound
- mlv_lite
- raw_rec (with some older version), only if the above did not work
- custom code that writes something to a file over and over (minimal example), only if the above did not work

Also, what does the crash look like? (e.g. can you film the camera screen during the crash?)

Feature Requests / Re: Mirror Lock-up
« on: November 16, 2017, 08:02:26 PM »

It's not enabled on recent models; uncomment FEATURE_MLU_HANDHELD in features.h, try and report. I did not find it effective on 5D3 and had better luck with its built-in silent mode (but it's been years since I've tried).

Camera-specific discussion / Re: Canon 6D
« on: November 15, 2017, 09:29:09 PM »
That information is enough to solve the problem, if you actually read those two pages from the DNG spec and put into practice (in exiftool's command line) what you have read.

What we know about ResLock is documented at and

The others are unknown; they can be found by understanding other image processing paths and cross-checking the numbers (that's how the known ones were found). It's what Canon code uses for CR2, and they are not the same in all cameras. Why there are differences - complete mystery.

Short answer: I have no idea where the black level difference comes from. It's likely from encoder configuration, but where exactly... no idea. It's not from our raw backend, for sure. I also doubt it's from these resources (if these are wrong, it either doesn't work, or locks up, or locks out other stuff from LiveView, or locks out unused stuff with no obvious side effects).

However, the defect is easy to work around - hence that exiftool puzzle. Per-channel black level in DNG is a trick I know from this forum (others described it some years ago).

The defect on 650D is even harder to understand, but that one is likely from resolution-related registers (maybe the ones I'm overriding, maybe others). Other models were just lucky (they happened to work without much tinkering).

Camera-specific discussion / Re: Canon 6D
« on: November 15, 2017, 07:42:38 PM »
All 4 color channels have a different offset and I can only change 2 values in BlackLevelRepeatDim.

Of course, you have a 2x2 Bayer pattern, not a 4D matrix.

Hint: you should read two pages from the DNG spec, not just one :D

Camera-specific discussion / Re: Canon 750D
« on: November 14, 2017, 11:34:47 PM »
Yes, the dry-shell console is definitely useful; unfortunately it only works in QEMU on DIGIC 4 and 5. On D6, it only recognizes the first character, then it locks up...

To debug this: -d io,uart,int

I'm also interested in checking MPU serial console logs from older cameras (in particular, DIGIC 4, where the MPU architecture is known and documented, unlike newer models where it's just a black box), but it's a low-priority task for me (more like a curiosity).

Try and report. The powersave change is recent, so back in 2016, half-shutter was needed to reset the timer (which is hidden somewhere in Canon code, and regular button "presses" from software did not touch it).

I did not test it against the 30-minute LiveView timeout; only against the 1-minute auto power off.

Camera-specific discussion / Re: Canon 6D
« on: November 14, 2017, 10:27:11 PM »
The initial hypothesis (when I wrote that test) was that the (newly found at that time) raw buffer should match the LiveView buffer, including its corners. Later, it turned out not to be true, so that test is no longer exact (but if you ignore the "roundoff" errors, it's still useful).

For 1x, look in raw_set_geometry - I believe there is an offset of 14 Bayer pixels skipped by Canon code when creating the preview. The test should be updated to account for this offset (e.g. raw_info.active_area.x1 would become raw_info.active_area.x1 + 14), after confirming that offset is correct for other cameras (I've only tested 5D3, and keep in mind my pixel peeping skills aren't the best).

For 5x, the raw buffer contains a lot more than what's displayed (guess why you can record 2...3K in this mode), so that test will display huge errors. In this case, the test would be more difficult to write (somebody has to sit down and do the math); the easy way out is to simply not interpret its results in this mode.

A better check would be to look at raw zebras - are they aligned with Canon's preview? (in all modes)

Outside LiveView, they are computed for every displayed pixel (on the BMP overlay). There, you should see pixel-perfect alignment with the analyzed image.

However, the raw zebras are quite low-res in LiveView - computed every 8 BMP pixels horizontally, so they won't align very well because of this. You could change the code in draw_zebras_raw_lv to operate at byte level (uint8_t instead of uint64_t, plus other adjustments) - it will be slower, but better for checking alignment. There is also zebra_highlight_raw_advanced (used by Auto ETTR when selecting Show metered areas). That one is computed for every BMP pixel, so it might be a better way to check the alignment between raw and LV buffers.

If you like easy coding tasks: what about adding some sort of alignment test in raw_diag? You could reuse RAW_ZEBRA_TEST or the existing zebra drawing functions from zebra.c, and display things in a way suitable for judging alignment (maybe a checkerboard pattern displaying either Canon or ML rendering, alternating between them to make any misalignment obvious).

Regarding your earlier issue - have you tried typing the error message, or the tag you are trying to change, in a search engine?
( advice from )

Code: [Select]
exiftool "Warning: Not enough values specified (2 required)"
per channel black level


Raw Video / Re: Problem with playing back whole raw card
« on: November 14, 2017, 08:59:41 PM »
If you use raw2dng to convert recovered2.raw, comment out fix_vertical_stripes(). It analyzes the first frame, which is not valid, so the correction code ends up actually introducing the defect.

Better yet - just delete the extra byte. Its location is 97*3629056 + 534*1920*14/8 - 110*14/8 = 0x1516c000 (first magic number is frame size as printed by raw2dng, second is the number of good lines from frame 97 - count the pixels - and third is number of bad columns on the right after fixing an integer number of lines, again found by counting the pixels).

Easiest way: open recovered.raw in a hex editor, navigate to offset 0x1516c000 and delete one byte (it will be a B7 in this particular file). Also change the number of frames in the footer to something very high (e.g. replace 97 00 with 97 FF). Save, run raw2dng (unmodified) and all 505 frames will be correct.

What's puzzling me: the offset is a multiple of 16384, but not a multiple of the line size (1920*14/8), and it ends up in the middle of the frame. Even though there's an extra byte introduced, the total file size is correct (an integer number of frames plus the footer). What exactly happened is a mystery to me.

For other files, you'll need to count the pixels and do the math, as the magic numbers will be different.
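The arithmetic above, spelled out (a sketch - the magic numbers are the ones quoted for this particular file, and the 14-bit lines are packed, hence the integer division by 8):

```python
frame_size = 3629056   # frame size as printed by raw2dng
good_lines = 534       # complete lines in frame 97 (found by counting pixels)
bad_cols   = 110       # bad columns on the right, after the last full line
width, bpp = 1920, 14  # 1920 pixels per line, 14 bits per pixel

offset = 97 * frame_size + good_lines * width * bpp // 8 - bad_cols * bpp // 8
print(hex(offset))     # 0x1516c000
```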

BTW - have you found what ML version was used?

Camera-specific discussion / Re: Canon 5D Mark III / 5D3 / Firmware 1.3.4
« on: November 14, 2017, 09:16:15 AM »
How does this look?

Obviously buggy :D

(the obvious part is on the UI, but the presence of the bug can be found in these logs)

Not sure how to confirm DIALOG_MnCardFormatExecute

It's usually spelled Excute in Canon code. It's probably correct, as it works on QEMU and it wouldn't work any more after changing it to some incorrect value (either not restoring ML, or locking up).
