Feature Requests / Re: 1080p crop mode on 60d
« on: June 25, 2012, 08:48:12 AM »
No (or at least not in the near future).
# Clone Magic Lantern and set up the QEMU emulation environment
hg clone https://bitbucket.org/hudson/magic-lantern
cd magic-lantern
hg up qemu -C
cd contrib/qemu
./install.sh

# Back in the repository root, build ML for the 550D and install it
cd ../..
hg up unified -C
cd platform/550D.109
make clean && make zip
make install        # copy the build to a mounted card
make install_qemu   # or install it into the QEMU virtual card
Your question really boils down to this:
"Why can't I capture more information, by throwing away information?"
Now from a more practical standpoint:
Compression (what you refer to as "lowering the bitrate") is a difficult, computationally intensive task, and doing it losslessly at the ratios video demands is effectively impossible. It is not a magical process where you throw some data in and it comes out smaller. The only way to get a high enough compression ratio for the incredibly huge size of a video data stream is to throw some of it away. The goal is to throw out the least important information, but we are throwing away information nonetheless. The better an algorithm is at throwing away data (i.e. the better it is at figuring out what data is unimportant), the more complex it typically is. There are very easy ways to throw away data, such as reducing the resolution and line skipping, and there are very hard ways, such as the DCT-based quantization and motion estimation used by H.264 (illustrated in the toy sketch below).
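To make the "throwing away" part concrete, here is a toy C sketch of the hard way: transform a block of pixel values and quantize away the coefficients that contribute the least. This is only a 1-D DCT on made-up numbers, not Canon's encoder and not Magic Lantern code; the quantization step of 50 is an arbitrary illustration.

#include <stdio.h>
#include <math.h>

#define N 8
static const double PI = 3.14159265358979323846;

/* naive 1-D DCT-II of an 8-sample block */
static void dct8(const double in[N], double out[N])
{
    for (int k = 0; k < N; k++)
    {
        double sum = 0.0;
        for (int n = 0; n < N; n++)
            sum += in[n] * cos(PI / N * (n + 0.5) * k);
        out[k] = sum;
    }
}

int main(void)
{
    /* one row of pixel values from a fairly smooth image region */
    double pixels[N] = { 52, 55, 61, 66, 70, 61, 64, 73 };
    double coef[N];
    dct8(pixels, coef);

    /* "throw away" the unimportant data: coarse quantization turns
     * most coefficients into zero, and zeros compress very well */
    int kept = 0;
    for (int k = 0; k < N; k++)
    {
        int q = (int)round(coef[k] / 50.0);
        if (q != 0) kept++;
        printf("coef[%d] = %7.1f -> quantized to %d\n", k, coef[k], q);
    }
    printf("kept %d of %d coefficients\n", kept, N);
    return 0;
}

Most of the signal ends up in the first couple of coefficients; the rest is what gets sacrificed, and making that trade-off well, in real time, is exactly the computationally intensive part.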
Let's now consider a (very oversimplified) pipeline that a video stream goes through in the camera:
Sensor -> Raw Data -> Image Processing (demosaic, wb, pic style, curves, etc.) -> H.264 Encoder -> Storage
When you talk of "bitrate" you are only talking about the bitrate at the very last step of this pipeline: the output of the encoder to the storage media. There are many other steps before that to consider. If you want a 1080p stream out of the encoder, you also need that 1080p stream to make its way through the rest of the pipeline (at 60 fps). That's where the limitation is; in fact, there are probably several. I'll just go over some of the possible ones:
1. The H.264 encoder can't handle 1080p video data coming into it at 60 fps (remember, it has to do something very complex and computationally intensive with that data and then spit out the result very quickly)
2. The image processing electronics can't handle 1080p of raw data at 60 fps
3. The internal buses that move the raw data from the sensor to the image processors can't handle that much data (1920 × 1080 × 14 bits × 60 fps ≈ 1.7 gigabits per second; see the quick calculation after this list)
4. The sensor itself isn't fast enough to sample 1080 lines at 60 fps (it takes some finite amount of time to read out each line, and they are read one by one)
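For a quick sanity check of the number in point 3, here is the back-of-the-envelope arithmetic as a small C program. The ~45 Mbit/s recording bitrate used for comparison is my assumption of a typical H.264 figure for these cameras, not a measured value.

#include <stdio.h>

int main(void)
{
    const double width = 1920, height = 1080, bits_per_sample = 14;

    /* raw (uncompressed) data rate off the sensor, in Gbit/s */
    double rate30 = width * height * bits_per_sample * 30 / 1e9;
    double rate60 = width * height * bits_per_sample * 60 / 1e9;

    printf("1080p30 raw: %.2f Gbit/s\n", rate30);   /* about 0.87 */
    printf("1080p60 raw: %.2f Gbit/s\n", rate60);   /* about 1.74 */

    /* compared with an assumed ~45 Mbit/s H.264 recording, roughly
     * 97% of the incoming information has to be discarded either way */
    printf("compression needed at 60p: about %.0f:1\n", rate60 * 1000.0 / 45.0);
    return 0;
}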
I'm not saying that all of those are true, but at least one of them is, and that's why the 60p mode is a lower resolution. Overcoming any of these obstacles is possible, but it would require more transistors (i.e. faster, more complicated electronics), which would make the camera more expensive. So, without more expensive internal electronics, the only way to get enough "compression" to even get our video data to the encoder is to "compress" the data starting at the sensor itself. And what's the only way to do that? Line skipping and reducing the resolution; basically, don't read in as many pixels (sketched below).
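To show what "compressing at the sensor" actually means, here is a minimal sketch of line skipping. The sensor dimensions and skip factor are purely illustrative, and this is not actual Canon readout code; the real readout happens in hardware, which is exactly why it is cheap.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* illustrative numbers: an 18 MP sensor read as 14-bit packed lines */
#define FULL_LINES   3456
#define LINE_BYTES   (5184 * 14 / 8)
#define SKIP_FACTOR  3               /* keep 1 line out of every 3 */

/* copy only every SKIP_FACTOR-th line out of a (simulated) sensor buffer */
static int readout_skipped(const uint8_t *sensor, uint8_t *out)
{
    int out_lines = 0;
    for (int y = 0; y < FULL_LINES; y += SKIP_FACTOR)
    {
        memcpy(out + out_lines * LINE_BYTES, sensor + y * LINE_BYTES, LINE_BYTES);
        out_lines++;
    }
    return out_lines;
}

int main(void)
{
    uint8_t *sensor = calloc(FULL_LINES, LINE_BYTES);
    uint8_t *frame  = calloc(FULL_LINES / SKIP_FACTOR + 1, LINE_BYTES);
    if (!sensor || !frame) return 1;

    int lines = readout_skipped(sensor, frame);
    printf("read %d of %d lines: data rate cut to 1/%d before any encoder runs\n",
           lines, FULL_LINES, SKIP_FACTOR);

    free(sensor);
    free(frame);
    return 0;
}

No clever math and no extra transistors: data that never leaves the sensor never has to be moved, processed, or encoded, which is why this is the kind of "compression" the camera falls back on.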