Not too sure about "connection #0" though -- this may be a wild goose chase.
Connections are probably hardcoded (or configured in a way we don't currently understand) to various image processing modules. So far, all the D4 and D5 models get the LiveView RAW data from connection 0. Connections 6 and 7 are pass-through (whatever you transfer there using a read EDMAC channel will be copied on the other side, to a write EDMAC channel configured for the same connection).
See this diagram (from http://www.magiclantern.fm/forum/index.php?topic=18315.msg188630#msg188630 )

A read EDMAC channel will read the data from RAM and send it to some image processing module.
A write EDMAC channel will get the data from some image processing module and will write it into RAM.
The data can be read via some input module (such as DSUNPACK, ADUNPACK, UNPACK24, or others - possibly unnamed), where you can configure the input bit depth: the input stream can be 10-bit, 12-bit, 14-bit or 16-bit, selected with DSUNPACK_MODE / ADUNPACK_MODE / UNPACK24_MODE / 0xC0F371FC / etc. The image processing module that actually does the work (e.g. JPCORE) probably receives normalized data.
A similar process happens on the output side, where a PACK module is used (PACK16, PACK32). Remember PACK32_ISEL and PACK32_MODE:
- PACK32_ISEL probably means "wire the input of the PACK32 module to whatever other image processing module outputs Bayer data, at various places in the pipeline";
- PACK32_MODE configures the output bit depth (10/12/14/16) of whatever image data arrives at the PACK32 module.
Currently, the uncompressed bit depth selection is done in raw_lv_request_bpp (raw.c, crop_rec_4k branch).
Experiments on the above can be made on existing code that's known to work (raw_twk for digic 4/5, EekoAddRawPath for digic 5), or on FA_MaxSelectTestImage / FA_SubtractTestImage (low-hanging fruit for understanding the image processing modules).
One interesting note from the crop_rec_4k thread, where I've implemented the 10/12-bit lossless compression by darkening the input raw data (so the input and output are still 14-bit, but the image actually uses fewer levels - as many as a 12-bit or a 10-bit image): for lossless compression, the entropy of a 10/12-bit stream is similar (maybe identical?) to the entropy of a 14-bit stream with each value shifted right by 4 or 2 bits (integer division by 16 or by 4).
How does that work?
raw.c:raw_lv_request_digital_gain:
- lv_raw_gain is written to SHAD_GAIN_REGISTER
- RAW_TYPE_REGISTER is set to 0x12 (DEFCORRE) - this image happens to be scaled by digital ISO gain and is not affected by bad pixels.
When the digital gain is not set, RAW_TYPE_REGISTER is set to CCD (probably the first stage where the raw data gets in the digital domain).
For a better understanding, set RAW_TYPE_REGISTER to DEFCORRE (0x4 on digic 4) without overriding SHAD_GAIN, and notice what happens at ISO 320 vs 400. Repeat for RAW_TYPE_REGISTER set to CCD. Then start overriding SHAD_GAIN with any values you want, even something like this:
if (get_halfshutter_pressed())
{
    EngDrvOut(SHAD_GAIN_REGISTER, rand());
}
Now look at the problems that appeared from this change (the 10/12-bit lossless implementation):
- First of all, the "slurp" EDMAC channel (the one that writes the raw data into memory) had to be configured with the exact resolution; the autodetection from raw_lv_get_resolution (0xC0F0680x/0xC0F0608x) is not exact - it's often off by one in the vertical direction, although the exact reason is unknown. Being off by one on the 5D3 resulted in the raw data being correct only on every other frame (I don't really understand why that happens). The issue appeared only with RAW_TYPE_REGISTER set to DEFCORRE; everything worked fine when it was set to CCD.
- Next, take a look at this bug: in a video mode with increased vertical resolution, the darkening works only on the top side, on an area equal to the default Canon resolution in that video mode. That means we have to reconfigure some more registers - probably in the image processing pipeline, all the way from CCD (sic) to DEFCORRE. Which ones? I don't know - I couldn't find them in adtg_gui. I hope to find them by emulating LiveView in QEMU, but that's going to be a really long journey.
That's why, for now, the 10/12-bit lossless compression only works in video modes with unmodified resolution (plain 1080p, plain 720p and 5x zoom).