Features copied from another Canon camera
Copying Canon code or functionality may carry legal risk for us. We respect Canon and love their products, and we are strict about staying on the right side of the law.

1080p 60fps, 2K, 4K, RAW video...
The best we could do was 1080p 35fps on 60D and 600D. Update: 4K works, but has major limitations.

Custom codecs
Codecs are not implemented on the general-purpose ARM processor. We can only use what Canon has already included in hardware (H.264, JPEG, LJ92) and fine-tune their parameters (such as the H.264 bit rate).
The lossless compression used for raw video is the same "codec" Canon uses for CR2. The same processing path (codenamed JPCORE) might be able to handle (M)JPEG. However, we cannot implement additional codecs (such as H.265, JPEG2000 or ProRes). Even if these could run on Canon's image processing hardware, we simply don't know where to start.

Things that can be done in post
Why spend development time on things like in-camera HDR? Magic Lantern is not a replacement for Photoshop.

Previewing is OK (e.g. HDR preview, anamorphic preview, fisheye correction preview).

Real-time video processing (e.g. stabilization, sharpness algorithms)
We can't program the image processor. These things can only be done if the functionality is already in Canon firmware (i.e. some parameters that can be tweaked, like in the Image Effects menu).

AF microadjustment
Not possible to control AF outside LiveView. Update: dot_tune works on cameras where AFMA is present in the Canon menu; not possible on other cameras with our current knowledge.

Image on both LCD and external monitor at the same time
Not possible (unless proven otherwise by DIGIC investigation).

AF confirmation without chipped adapters
Not possible (the camera rejected all attempts to fake lens info).

Timecode
Very difficult (see http://www.magiclantern.wikia.com/wiki/Timecode ). The 5D Mark III has it.

Continuous AF in movie mode
Very difficult to do right (we couldn't).

Scrollwheel controls
It's not possible to remap them while recording. In standby, the ML menu uses a trick: it opens a Canon dialog in the background and steals wheel events from it, but this trick doesn't work while recording.

1D support
These cameras are way outside our reach. Even if we could buy them, very few 1D users would benefit from ML. There are also legal concerns regarding Canon's pro line of cameras.
Sure, at some point some of these might become possible, but the chances are extremely small. Spending time on them is effectively searching for a needle in a haystack.
A detailed explanation by dmilligan on why Magic Lantern cannot increase the FPS of these cameras:
Your question really boils down to this:
"Why can't I capture more information, by throwing away information?"
Now from a more practical standpoint:
Compression (what you refer to as "lowering the bitrate") is a difficult, computationally intensive task (and, at these ratios, lossless compression is simply not enough). It is not a magical process where you throw some data in and it comes out smaller. The only way to get enough of an effective compression ratio for the incredibly huge size of a video data stream is to just throw some of it away. The goal here is to throw out the least important information, but we are throwing away information nonetheless. The better an algorithm is at throwing away data (i.e. the better it is at figuring out what data is unimportant), typically the more complex it is. There are very easy ways to throw away data, such as reducing the resolution and line skipping, and there are very hard ways, such as DCT-based encoding.
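The "easy" ways of throwing away data mentioned above can be sketched in a few lines. This is an illustrative toy in plain Python (all function names are invented for this sketch, not actual camera code):

```python
# Sketch: the "easy" ways to throw away data, on a toy frame
# represented as a list of rows of pixel values.

def line_skip(frame, step=2):
    """Keep every `step`-th row -- roughly how cameras derive lower-res video."""
    return frame[::step]

def reduce_resolution(frame, factor=2):
    """Average `factor` x `factor` blocks into one pixel (simple binning)."""
    out = []
    for r in range(0, len(frame) - factor + 1, factor):
        row = []
        for c in range(0, len(frame[0]) - factor + 1, factor):
            block = [frame[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

frame = [[(r * 16 + c) % 256 for c in range(8)] for r in range(8)]

skipped = line_skip(frame)           # 4 rows x 8 cols: half the data
binned = reduce_resolution(frame)    # 4 rows x 4 cols: a quarter of the data
print(len(skipped), len(skipped[0]))  # 4 8
print(len(binned), len(binned[0]))    # 4 4
```

Both functions discard data up front, before any encoder ever sees it, which is exactly what happens when a camera derives a lower-resolution video stream straight from the sensor.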
Let's now consider a (very oversimplified) pipeline that a video stream goes through in the camera:
Sensor -> Raw Data -> Image Processing (demosaic, wb, pic style, curves, etc.) -> H.264 Encoder -> Storage
When you talk of "bitrate", you are only talking about the bitrate at the very last step of this pipeline: the bitrate out of the encoder to the storage media. There are many other steps to consider before that. If you want a 1080p stream out of the encoder, you also need that 1080p stream to make its way through the rest of the pipeline (at 60 fps). That's where the limitation is; in fact, there are probably several. I'll go over some of the possible ones:
1. The H.264 encoder can't handle 1080p of video data coming into it at 60 fps (remember, it has to do something very complex and computationally intensive with the data and then spit out the result very quickly).
2. The image processing electronics can't handle 1080p of raw data at 60 fps.
3. The internal buses that move the raw data from the sensor to the image processors can't handle that much data (1920 × 1080 × 14 bit × 60 fps ≈ 1.7 gigabits per second).
4. The sensor itself isn't fast enough to sample 1080 lines at 60 fps (it takes some finite amount of time to read out each line, and they are read one by one).
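Point 4 can be made concrete with a quick back-of-the-envelope calculation (the numbers are illustrative, not measured Canon specs): with a line-by-line readout, each of the 1080 lines has to fit into a tiny slice of the frame period.

```python
# Back-of-the-envelope per-line readout budget for a rolling,
# line-by-line sensor readout at 1080 lines and 60 fps.

lines, fps = 1080, 60
frame_period_us = 1e6 / fps                 # ~16667 us available per frame
line_budget_us = frame_period_us / lines    # time available per line
print(f"{line_budget_us:.1f} us per line")  # 15.4 us per line
```

If the sensor's readout circuitry needs more than ~15 microseconds per line, 1080-line 60 fps capture is physically off the table, no matter what the encoder can do.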
I'm not saying that all of these are true, but at least one of them is, and that's why the 60p mode is a lower resolution. Overcoming any of these obstacles is possible, but it would require more transistors (i.e. faster, more complicated electronics), which would make the camera more expensive. So, without more expensive internal electronics, the only way to get enough "compression" to even get our video data to the encoder is to "compress" the data starting at the sensor itself. And what's the only way to do that? Line skipping and reducing the resolution: basically, don't read in as many pixels.
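To put numbers on this argument: comparing the raw 14-bit 1080p60 stream with an assumed H.264 output bitrate (~45 Mbit/s is a typical ballpark figure, not a Canon spec) shows roughly how much data has to be thrown away somewhere in the pipeline.

```python
# Raw sensor data rate for 14-bit 1080p at 60 fps, versus an assumed
# H.264 output bitrate, giving the overall compression ratio required.

width, height, bit_depth, fps = 1920, 1080, 14, 60
raw_bps = width * height * bit_depth * fps   # bits/s entering the pipeline
h264_bps = 45e6                              # assumed encoder output bitrate

print(f"raw stream: {raw_bps / 1e9:.2f} Gbit/s")      # raw stream: 1.74 Gbit/s
print(f"required ratio: {raw_bps / h264_bps:.0f}:1")  # required ratio: 39:1
```

A roughly 39:1 reduction is far beyond what lossless methods achieve, which is why some of that reduction has to happen before the encoder, at the sensor readout itself.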