4k Filming

Started by krashnik, June 15, 2013, 06:04:17 PM

g3gg0

how hard is it to understand what this thread is meant for?
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

JamieRollsMedia650D

Quote from: g3gg0 on June 24, 2013, 12:56:41 AM
how hard is it to understand what this thread is meant for?

Well, it's not hard at all. I just need to know if it's possible to get it working; if so, I will invest and start working.


Michael Zöller

Quote from: g3gg0 on June 15, 2013, 06:17:27 PM
there is no programming needed. first it needs reverse engineering.

understand how to use ADKIZ, TWOADD, HIV, DEFM, SHAD and the other modules, then understand JP62
and finally set up the communication between those modules using EDMAC.

> http://magiclantern.wikia.com/wiki/Register_Map
neoluxx.de
EOS 5D Mark II | EOS 600D | EF 24-70mm f/2.8 | Tascam DR-40

Chucho

Quote from: g3gg0 on June 15, 2013, 06:17:27 PM
there is no programming needed. first it needs reverse engineering.

understand how to use ADKIZ, TWOADD, HIV, DEFM, SHAD and the other modules, then understand JP62
and finally set up the communication between those modules using EDMAC.

> http://magiclantern.wikia.com/wiki/Register_Map

I still don't understand how to use EDMAC, but what about the Projection functions?
"FA_SetProjectionMode"
"FA_SetChannelNum"
"FA_SetProjectionWindow"
"FA_SetHProjectionRange"
"FA_ProjectionTestImage"
the "FA_SetProjectionWindow" function takes two arguments that look like image resolutions to me.

g3gg0

analyze them and tell what they do ;)
sounds like displaying an image on screen?

1%

I think they are for the YUV zoom window but not sure. Maybe useful to get big YUV and record that.

grooveminister

Hi all,

I'm new here, but after my first very positive contact with ML and the RAW2GPCF > CineFormRAW workflow I also have a question:

I was totally stunned that ML saves the complete Bayer-pattern portion (at full width, with the height according to the set aspect ratio) to CF.
When I set RAW2GPCF to convert to CineFormRAW (and not -422 or -444), I can choose in FirstLight how I want the image to be debayered (AdvancedDetail 1-3...).

My thought is: Magic Lantern can sample the full sensor width and is NOT limited to crop mode.
Do we need any changes in ML at all to develop 4K files (or streams) in the future?
I guess the issue is more on the post side?

From what I've experienced with CineForm, I think if CineFormRAW is capable of decoding to an NLE in real time, with debayering and scaling to e.g. 1920x1080 included, why shouldn't software be capable of scaling to 4K in the future?
CineFormRAW is already handling the full Bayer pattern that ML has saved to CF. I think the downscaling part is not the most complex one. OK, all the development functions in CineForm (WB, primary corrections, 3D LUT) might be done on the downsampled representation and would cost far more CPU at 4K, BUT:

Basically I think it should be possible to develop our current MLraw files to 4K in the future,
because MLraw already provides enough information to debayer and downsample to 4K.

At least on the 5D3 (5760 x 3840) it should be no problem to scale to 4K after debayering, as RAW software for stills (DPP, Lightroom, ACR) extracts full 5760x3840 pictures out of the Bayer pattern of the sensor.

Or am I missing something?

Best wishes,
Andreas

1%

What we're missing is the camera making 4K of anything.

grooveminister

I was talking about Magic Lantern RAW, and that it would be perfectly possible to develop the already existing full-frame MLraw files to 4K.
I don't think that any Canon camera has enough internal computing power to properly downsample and encode 4K compressed streams in any format, be it H.264 or anything else...

Current MLraw files captured in full-frame mode provide all the sensor data needed to develop to 4K.
So it's up to the developers of post-production tools to just support 4K output from MLraw.

noisyboy

Quote from: grooveminister on June 24, 2013, 07:21:08 PM
I was talking about Magic Lantern RAW, and that it would be perfectly possible to develop the already existing full-frame MLraw files to 4K.
I don't think that any Canon camera has enough internal computing power to properly downsample and encode 4K compressed streams in any format, be it H.264 or anything else...

Current MLraw files captured in full-frame mode provide all the sensor data needed to develop to 4K.
So it's up to the developers of post-production tools to just support 4K output from MLraw.

The raw image captured isn't of the full sensor, dude. There is line skipping/pixel binning that occurs to get the image you have on your card, in the same way that the H.264 data is captured. Each frame isn't like what you would get with a still image, if you see what I mean? Sorry if I misunderstood you :)

1%

Actually, in zoom mode there are fewer or no line skips.

But you can already do this in ACR... the 600D people are all doing it.

noisyboy

Quote from: 1% on June 24, 2013, 08:29:34 PM
Actually, in zoom mode there are fewer or no line skips.

But you can already do this in ACR... the 600D people are all doing it.

My bad - zoom mode slipped my tired mind  8)

So do you think there is room to explore the defined area of the zoom modes? I.e. change it from 5x or 10x to something else?

Also, sorry, what exactly are the 600D peeps doing?

1%

Uprezzing to 1080P from like 540.

Do the raw type test... we need to pick one.. hopefully one is acceptable.

grooveminister

Quote from: noisyboy on June 24, 2013, 08:25:29 PM
The raw image captured isn't of the full sensor, dude. There is line skipping/pixel binning that occurs to get the image you have on your card, in the same way that the H.264 data is captured. Each frame isn't like what you would get with a still image, if you see what I mean? Sorry if I misunderstood you :)

I was one of the very first to have a 5D2 and suffered from the line skipping.
But on the 5D Mark III it should work: no line skipping or pixel binning, AFAIK.

eatstoomuchjam

Quote from: grooveminister on June 25, 2013, 09:51:47 AM
I was one of the very first to have a 5D2 and suffered from the line skipping.
But on the 5D Mark III it should work: no line skipping or pixel binning, AFAIK.

He already explained it to you. 
In full-frame mode, the maximum resolution is 1920x1280.  In windowed (5x/crop) mode, it's 3584x1320, but it's unlikely that recording the maximum resolution will ever be possible because the card writer in the camera is limited to 160-170MB/s and that size requires a bit more than that (190 MB/s).

Also, if it were somehow possible to dump the entire resolution of the sensor in the 5DIII, it'd be 5760x3840.  At 14 bits per pixel, that comes to 309,657,600 bits per frame (38,707,200 bytes).  At 24 frames per second, that's 928,972,800 bytes per second.  For your theory that ML is writing every single pixel on the sensor out to the card, ML would have to be writing about 886MB/s.
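If you want to check the arithmetic yourself, here is a quick back-of-the-envelope calculator (it assumes 14-bit packed raw and ignores any file overhead):

Code:
#include <stdio.h>

/* Back-of-the-envelope data-rate estimate for 14-bit packed raw video.
 * Assumptions: 14 bits per pixel, no headers/alignment overhead. */
static double raw_rate_mb_per_s(long long width, long long height, long long fps)
{
    long long bits_per_frame  = width * height * 14;
    long long bytes_per_frame = bits_per_frame / 8;
    return (double)(bytes_per_frame * fps) / (1024.0 * 1024.0);
}

int main(void)
{
    /* 5x crop mode maximum on the 5D3 */
    printf("3584x1320 @ 24fps: %.0f MB/s\n", raw_rate_mb_per_s(3584, 1320, 24)); /* ~189 */
    /* hypothetical full-sensor dump */
    printf("5760x3840 @ 24fps: %.0f MB/s\n", raw_rate_mb_per_s(5760, 3840, 24)); /* ~886 */
    return 0;
}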

Read the rest of the posts on this forum and you will understand better.  Right now, you're making suggestions with no understanding of what you're talking about.  Even worse, you're making suggestions that make no sense.

grooveminister

Oooops, sorry.
I totally understand now. Thanks for explaining; I didn't do my math...
So ML takes the full readout of the sensor (?) and applies some pixel binning to reduce the data?
Or are lines/pixels skipped on the 5D3?

Thanks again for clarifying, and I'm sorry for the confusion I didn't want to create!

coutts

Quote from: grooveminister on June 25, 2013, 09:08:55 PM
Oooops, sorry.
I totally understand now. Thanks for explaining; I didn't do my math...
So ML takes the full readout of the sensor (?) and applies some pixel binning to reduce the data?
Or are lines/pixels skipped on the 5D3?

Thanks again for clarifying, and I'm sorry for the confusion I didn't want to create!


Canon does all of the pixel binning / line skipping and outputs an image to a buffer, updated at the framerate of LiveView (24fps, for example). All we do is save the image in that buffer to the card 24 times a second to make the video. The image is 14-bit raw data.
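In rough pseudo-code the whole trick is just this (a simplified sketch; the helper names are placeholders, not the actual ML functions):

Code:
/* Simplified sketch of the raw recording idea described above.
 * The helper names below are illustrative placeholders, not real ML APIs. */
#include <stddef.h>

struct raw_frame {
    void  *buf;     /* 14-bit packed Bayer data, as Canon leaves it in the LV buffer */
    size_t size;    /* roughly width * height * 14 / 8 bytes */
};

/* placeholders for: getting Canon's LiveView raw buffer, writing to CF, etc. */
extern struct raw_frame get_liveview_raw_frame(void);    /* hypothetical */
extern int  write_to_card(const void *buf, size_t size); /* hypothetical */
extern int  recording_stopped(void);                     /* hypothetical */

static void record_loop(void)
{
    /* Canon already did the binning/skipping; we only copy what is in the buffer. */
    while (!recording_stopped())
    {
        struct raw_frame f = get_liveview_raw_frame(); /* one frame per LV refresh */
        write_to_card(f.buf, f.size);                  /* ~24 frames per second */
    }
}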

mkrjf

Can the developers provide an overview of how the sensor is sampled?
It seems there are two modes: line skip (also row skip?) and center cut / 3x zoom.
For the line-skip mode, is the output to the CF card every other Bayer 'cluster' vertically and horizontally, taken from the center of the sensor?

http://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/lecture-notes/MITRES_6_007S11_lec17.pdf
Look at figures 17.2 and 17.4. Is that the same as the line skip used in ML? Then just put the blank lines and rows back (resample to 4K) and interpolate.
Convolution with a sinc function (on the graphics card?) should produce 'ideal interpolation' for band-limited input (which this is: Canon has an optical low-pass filter in front of the sensor). It should restore the image to the original with some softening.
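For illustration, a toy 1-D windowed-sinc (Lanczos) resampler would look something like this; a real pipeline works in 2-D and per Bayer channel, so treat it purely as a sketch of the idea:

Code:
/* Toy 1-D windowed-sinc (Lanczos) interpolation sketch for the idea above. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double sinc(double x)
{
    if (x == 0.0) return 1.0;
    double px = M_PI * x;
    return sin(px) / px;
}

/* Lanczos window of order a: sinc(x) * sinc(x/a) for |x| < a, else 0 */
static double lanczos(double x, int a)
{
    if (fabs(x) >= a) return 0.0;
    return sinc(x) * sinc(x / a);
}

/* Interpolate a sample at fractional position 'pos' from a 1-D signal of length n. */
static double resample_1d(const double *in, int n, double pos, int a)
{
    double sum = 0.0, wsum = 0.0;
    int center = (int)floor(pos);
    for (int i = center - a + 1; i <= center + a; i++)
    {
        if (i < 0 || i >= n) continue;      /* simple edge handling: skip */
        double w = lanczos(pos - i, a);
        sum  += w * in[i];
        wsum += w;
    }
    return wsum != 0.0 ? sum / wsum : 0.0;
}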

Anyway just an idea. Requires knowing exact grid subsampling performed by ML code.

vroem

I'm not a dev but here is how Canon skips lines:

It keeps one out of every 3 pixels in every direction.
Some zoom levels have other skipping techniques, but they are even worse.
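If it helps to see it in code, keeping one 2x2 Bayer cluster out of every three in each direction would look roughly like the sketch below (the exact pattern and offsets Canon uses in its readout hardware may differ; this is only illustrative):

Code:
/* Toy sketch of "keep one Bayer cluster out of every three, in both directions".
 * The real decimation happens in Canon's readout hardware; offsets here are illustrative. */
#include <stdint.h>

#define SKIP 3   /* keep 1 of every 3 clusters horizontally and vertically */

/* in:  full-resolution Bayer mosaic, in_w x in_h (even dimensions)
 * out: decimated mosaic, (in_w/SKIP) x (in_h/SKIP), Bayer pattern preserved */
static void skip_bayer(const uint16_t *in, int in_w, int in_h,
                       uint16_t *out, int out_w, int out_h)
{
    for (int cy = 0; cy < out_h / 2; cy++)          /* cluster rows */
    {
        for (int cx = 0; cx < out_w / 2; cx++)      /* cluster columns */
        {
            int sy = cy * 2 * SKIP;                 /* source cluster origin */
            int sx = cx * 2 * SKIP;
            /* copy the whole 2x2 RGGB cluster so the mosaic stays valid */
            for (int dy = 0; dy < 2; dy++)
                for (int dx = 0; dx < 2; dx++)
                    out[(cy*2 + dy) * out_w + (cx*2 + dx)] =
                        in[(sy + dy) * in_w + (sx + dx)];
        }
    }
}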

Heh time for an avatar :-)

krashnik

Quote from: eatstoomuchjam on June 25, 2013, 06:19:11 PM
  In windowed (5x/crop) mode, it's 3584x1320, but it's unlikely that recording the maximum resolution will ever be possible because the card writer in the camera is limited to 160-170MB/s and that size requires a bit more than that (190 MB/s).

Is it confirmed that the card writer in these cameras is limited to ~170MB/s? If so, that is the main bottleneck we face, and surpassing the limit would mean a hardware hack to bypass the card writer and go straight into a different storage system, such as an SSD.

freakygeez

Not a programmer in any shape or form but...

I'm wondering: if we drop the bitrate to web-video levels, say 12 or 25Mbps, then with today's 4K-capable SD cards that should give the camera (a 600D in this case) just enough room to do 2K+ HD, or 2.5K to SD, at a push?

The obvious trouble with that is twofold:

1. What's the point if the information isn't there, save for downsizing?
2. No audio; it would use too much memory, and you'd use a proper recorder anyway, lol :)


With these low bitrates you're probably not going to get 2 shots from 1 wide, but at least the footage will look significantly sharper once scaled down in post, if slightly more artifacty compared to the full 40Mbps. I would also recommend choosing a picture profile to burn in your look, as anything below 50Mbps is un-gradable.

A lot of pro cameras do use 25Mbps (the Sony FS700 without the 4K upgrade, for example); it's a usable data rate, but you don't grade it, as it falls apart almost immediately, which is why this rate is used a lot for ENG-type cameras.

Anyone?

dmilligan

In the following post I talk about why 1080p (versus 720p) is not possible at 60fps, but exactly the same reasoning applies to 2K or 4K or xK versus 1080p at 30fps; just replace '1080p' with '4K' and '60fps' with '30fps' when reading it:

http://www.magiclantern.fm/forum/index.php?topic=11570.msg113776#msg113776

Danialdaneshmand

No expert here, but since users want 4K on their DSLRs, it may be that the new 5D Mark IV will have this feature.
If porting ML to it were possible, is there a way to reverse-engineer the feature?

menoc

Quote from: Danialdaneshmand on December 30, 2014, 07:19:52 AM
No expert here, but since users want 4K on their DSLRs, it may be that the new 5D Mark IV will have this feature.
If porting ML to it were possible, is there a way to reverse-engineer the feature?

The real problem is the hardware. You need faster hardware to do 4K. Trying to implement 4K with hardware that was not designed for it will cause many problems, such as heating (which has many side effects on a clean image). My guess is that the 5D Mark IV will have a faster processor and will use CFast, a faster storage solution. There's not much you could do unless you hacked the camera, installed faster hardware and made it work with the firmware. That would be a big no-no! (You would be voiding your warranty, not to mention that you'd also be on the wrong side of the law!)

We can't go past the hardware limit. You have to program for the hardware and not the other way around.

maxEmpty

I was just wondering where the H.264 encoder is located inside the camera: is it on the sensor, or is it separate? If it is separate from the sensor, then you could Peltier-cool it like people do in astrophotography.
That could allow us to cool the sensor or the encoder to -10°C or a lot lower (the lowest I've heard of is -94°C, brrr!). In any case, I think the biggest problem would be the clock of the H.264 encoder, which I imagine is constant and meant to work at a few fixed levels: 1080 max 30fps, 720 max 60fps, 480 (WVGA) max 30fps... etc.
if at first you cant get L glass... Sell everything you own and get L glass.