I've tested the 20-bit version with a few infrared files, but these two are even stronger.
The result is a little noisier than I expected. In the channel mixer, I found this ratio to be close to the sweet spot, noise-wise: R=1, G=0.6, B=0.1 (at auto WB 1, 6.5, 20).
What crop window did you use for the test sample? I can't identify it in the CR2s...
edit: after playing a bit with your sample, here are some tips:
- your files have a very high SNR in the red channel, but quite poor SNR in the green and blue channels; unlike regular pictures, where the green channel is the cleanest
- therefore, the first tip would be to disable chroma smoothing (--no-cs)
- related: internally I use a temporary demosaic to get a better interpolation in half-res areas; however, this demosaicing takes some noise from the green/blue channels and puts it into the red channel, so let's disable this step (--mean23, which averages only pixels of the same color)
- in your flower sample, you have out-of-focus shadows, so we don't need to recover full-res detail there (--no-alias-map)
- in your flower sample, the resolution is not critical either, so let's also try --no-fullres
- in some cases, stripe correction may be less than ideal, so let's try disabling it (--no-stripe-fix)
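Putting the tips together, an invocation might look like the following (the binary name cr2hdr and the input filename are placeholders; the flags are the ones discussed above):

```
# hypothetical command line combining the flags from the tips above;
# "cr2hdr" and "IMG_1234.CR2" are placeholders
cr2hdr --no-cs --mean23 --no-alias-map --no-fullres --no-stripe-fix IMG_1234.CR2
```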
Results (click for full-res):

Script used to render these crops (you may find it useful for trying things out): compare-options.py

Kalman analysis (optimal averaging formula, see http://robocup.mi.fu-berlin.de/buch/kalman.pdf):
- let's assume constant additive noise, in arbitrary units normalized to get stdev=1 for the red channel
- green channel will have stdev=6.5 per pixel (from auto WB)
- blue channel will have stdev=20 (from auto WB)
- now, let's mix one red, two greens and one blue (as with dcraw -h) => green channel has stdev=6.5/sqrt(2) = 4.6 per RGGB cell
- variances: 1, 21, 400
- optimal blending factors (after auto wb, without color matrix): 1, 0.048, 0.0025.
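The inverse-variance (Kalman) weighting above can be checked with a quick sketch; the stdev values 1, 6.5, 20 are the auto-WB figures from the post:

```python
import math

# Per-pixel noise stdev after auto WB (values from the post)
stdev = {"R": 1.0, "G": 6.5, "B": 20.0}
# The two greens in each RGGB cell are averaged -> stdev shrinks by sqrt(2)
stdev["G"] /= math.sqrt(2)

var = {ch: s ** 2 for ch, s in stdev.items()}    # ~1, 21, 400
# Optimal (inverse-variance) blending factors, normalized to R = 1
w = {ch: var["R"] / v for ch, v in var.items()}
print({ch: round(x, 4) for ch, x in w.items()})
# -> {'R': 1.0, 'G': 0.0473, 'B': 0.0025}
```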
Now, look at tip #2 from above: debayering will usually do some kind of channel mixing (depending on the algorithm), which will introduce noise. Let's minimize this. Under the additive noise hypothesis (same noise stdev for all pixels), the noise from each pixel should be mixed in equal proportions, so let's do the debayer at UniWB (multipliers 1,1,1). This is not exactly a good idea for color images, but it should be close to optimal noise-wise. In this case, the optimal blending factors become 1, 0.3, 0.05, since WB is no longer applied before mixing.
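The UniWB factors can be recovered from the same assumptions by folding the WB gain into the inverse-variance weight (a sketch; the multipliers 1, 6.5, 20 are from the post):

```python
import math

# Auto-WB multipliers (from the post)
wb = {"R": 1.0, "G": 6.5, "B": 20.0}
# Noise variance the channels would have after WB
# (green: two pixels averaged per RGGB cell, so variance is halved)
var_wb = {"R": 1.0, "G": (6.5 / math.sqrt(2)) ** 2, "B": 400.0}
# On uniWB data the WB gain folds into the mixing weight:
w = {ch: (var_wb["R"] / var_wb[ch]) * wb[ch] for ch in wb}
print({ch: round(x, 2) for ch, x in w.items()})
# -> {'R': 1.0, 'G': 0.31, 'B': 0.05}
```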

Proof that the difference is caused by debayering: develop with --shrink=2 (half-res without debayer), both WB settings, and the noise difference will vanish (auto-wb vs uniwb).
Exercise for the reader: show (mathematically) that with AHD debayering performed at WB 1,6.5,20 and the color matrix applied, the optimal blending factors for the channel mixer are close to 1, 0.6, 0.1 (my initial guess).