I just don't get why this is dismissed as "won't fix".
Just try an older ML version with CBRe (not CBR). That feature did exactly what you are requesting.
To rewrite Canon's compression algorithm (the QScale adjustment part), one has to:
1) Find out when this adjustment should be done (somewhere in this state machine:
http://a1ex.bitbucket.io/ML/states/550D-alt/sm33.htm )
2) Design the algorithm. How should the QScale change, and based on what input data? Buffer level alone is not enough; you also need a mathematical model of what fluctuations to expect. The H.264 encoder operates at constant quality, so to get constant frame size you have to look at previous frames. This is a real-time operation, not a two-pass offline encoding process.
3) Figure out how to test it. How can you tell it's more robust than Canon's? They have already done a huge amount of testing and probably have engineering teams dedicated to exactly this.
I'm not going to do this. It would probably take thousands of hours of work for a very small improvement that most people won't even notice.
Instead, try to change the parameters exposed by Canon's H.264 implementation. This is happening here:
http://www.magiclantern.fm/forum/index.php?topic=1565.0