It should work just fine; even exposure jumps of 1-2 stops should not cause problems.
However, you need to be careful to avoid overexposure (maybe configure ETTR to underexpose a little more than usual).
Also, keep in mind that FRSP exposures are actually gradients, and changing the exposure also changes the gradient ramp. The entire sensor starts capturing at the same time; after some delay, the capture is stopped line by line as the image is read out, so the top and bottom lines end up with different exposure times. That difference stays constant in absolute terms, which means its relative contribution grows as the exposure gets shorter.
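To make that concrete, here's a toy model of the gradient; the line count and readout time below are made-up example values, not measured ones:

```c
#include <stdio.h>

#define NUM_LINES        3870       /* active sensor lines (example value)      */
#define READOUT_TIME_US  250000.0   /* total line-by-line readout (example)     */

/* Effective exposure of a given line: the base exposure plus the time
 * that elapses before the line-by-line readout reaches that line.      */
static double line_exposure_us(double base_exposure_us, int line)
{
    double per_line_delay = READOUT_TIME_US / NUM_LINES;
    return base_exposure_us + line * per_line_delay;
}

int main(void)
{
    double base = 1000000;   /* 1-second base exposure, in microseconds */
    printf("top: %.0f us, bottom: %.0f us, ratio: %.4f\n",
           line_exposure_us(base, 0),
           line_exposure_us(base, NUM_LINES - 1),
           line_exposure_us(base, NUM_LINES - 1) / line_exposure_us(base, 0));
    return 0;
}
```

With these numbers, at a 1-second base exposure the bottom line gets about 25% more light than the top one; at 1/10 s it would get 3.5x more, which is why the ramp changes with the exposure setting.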
I don't know how much this would impact a deflicker algorithm yet...
I can imagine a way to capture metadata that would give the exact exposure time for each line (so one could undo the gradient), although it would be tedious and model-dependent. It would require logging the video timer registers while Canon code performs the capture; during a long exposure, the sensor is reconfigured several times. Interpreting that data should then give an accurate exposure time for every single line in the image (I hope).
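If those per-line timings were available, undoing the gradient itself would be easy; a minimal sketch, where exposure_us[] (filled from the logged timer data) and the linear float buffer are just assumptions for illustration:

```c
/* Normalize every line to the exposure of the first line.
 * Assumes linear data with the black level already subtracted;
 * exposure_us[] would come from the logged timer registers.   */
static void undo_gradient(float *image, int width, int height,
                          const double *exposure_us)
{
    for (int y = 0; y < height; y++)
    {
        float gain = (float)(exposure_us[0] / exposure_us[y]);
        for (int x = 0; x < width; x++)
            image[y * width + x] *= gain;
    }
}
```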
Or, the per-line exposure times can be approximated from the capture time (currently saved as metadata). This capture time includes:
- a dark frame (every time you take a still picture, Canon code takes a dark frame first; it's not at full resolution, just a few hundred lines)
- delays from inter-task communications (semaphores, message queues); repeatability here is a few tens of milliseconds
- the actual capture time, until the last line is read out
- extra overheads (sensor setup, DMA, whatever else Canon code does)
All this extra stuff can probably be modeled as a fixed overhead, to be subtracted from the capture time.
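Something like this, where the overhead value is a pure placeholder that would need per-model calibration, and deflicker_correction_ev is a made-up helper name:

```c
#include <math.h>

/* Placeholder overhead: dark frame + task delays + sensor setup.
 * The real value would have to be calibrated for each camera model. */
#define OVERHEAD_US  150000.0

/* Estimated effective exposure, from the capture time saved as metadata. */
static double estimate_exposure_us(double capture_time_us)
{
    double e = capture_time_us - OVERHEAD_US;
    return e > 0 ? e : 0;
}

/* Deflicker-style correction, in EV, relative to a reference frame:
 * positive means the current frame should be brightened.             */
static double deflicker_correction_ev(double ref_capture_time_us,
                                      double capture_time_us)
{
    return log2(estimate_exposure_us(ref_capture_time_us) /
                estimate_exposure_us(capture_time_us));
}
```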
Not sure all of this is actually worth the trouble...