Magic Lantern (RAW) Video format v2.0 (mlv_rec.mo)

Started by g3gg0, July 15, 2013, 10:58:23 PM


CoresNZ

Quote from: dadinio13 on August 02, 2013, 06:03:34 AM
https://docs.google.com/file/d/0B-tM9Z6JauKNTjBRS2dIWE5LUDA/edit

MLV_DUMP

BUT for now you can't actually convert the .RAW produced from your .MLV with raw2dng.

Link doesn't work for me, no permission.

Anyone able to make MLV_Dump available please?

dadinio13

Quote from: CoresNZ on August 03, 2013, 01:56:27 PM
Link doesn't work for me, no permission.

Anyone able to make MLV_Dump available please?

Working now.
But I'm still not able to convert the out.RAW to .DNG with raw2dng...


g3gg0

Quote from: dadinio13 on August 03, 2013, 02:42:45 PM
But I'm still not able to convert the out.RAW to .DNG with raw2dng...

Are you just not able to, or is there an error message?
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

dadinio13

raw2dng:
Error: This ain't a lv_rec RAW file.

and raw2dng for OSX by scraxx tells me:
[image2pipe demuxer @ 0x102021c00] Could not parse framerate: .
pipe:0: Invalid argument


I think this is a framerate detection problem.
I have the latest ML core version.

Should I "make clean" mlv_dump?

marekk

g3gg0, is there a possibility to read the current white balance value from the camera and write it to the raw file? It could improve postprocessing a lot. It's not a problem to edit it manually when we've got 10 raw files, but with 100 it takes a lot of time.

nandoide

Can anyone test how many seconds it is possible to record at the highest resolutions (3K, 3.5K) at 24 fps and at 18-20 fps without frame skipping?
Thanks a lot

kgv5

It's not long; 3.5K 24p is still 1-2 sec, 15 fps gives a couple of seconds more.

I checked the newest version:
3.5K 3584x1320
23.976 fps - 1 sec (35 frames) (write speed CF 81-81.5 MB/s, SD 17-17.5 MB/s)
20 fps - 1.5 sec (39 frames)
18 fps - 4 sec (70 frames)
15 fps - continuous OK (write speed CF 101 MB/s, SD 17.7 MB/s)
www.pilotmovies.pl   5D Mark III, 6D, 550D

g3gg0

Quote from: dadinio13 on August 03, 2013, 04:59:57 PM
raw2dng:
Error: This ain't a lv_rec RAW file.
...
Should I "make clean" mlv_dump?

Cannot reproduce,
neither in CF-only mode nor in CF+SD file spanning mode.
Try 'gcc mlv_dump.c -o mlv_dump -I../../src' instead of make.


Quote from: kgv5 on August 03, 2013, 07:10:43 PM
It's not long; 3.5K 24p is still 1-2 sec, 15 fps gives a couple of seconds more.

That is how many MiB/s (CF+SD)?
Did you use the latest revision?

Quote from: marekk on August 03, 2013, 05:46:51 PM
g3gg0, is there a possibility to read the current white balance value from the camera and write it to the raw file? It could improve postprocessing a lot. It's not a problem to edit it manually when we've got 10 raw files, but with 100 it takes a lot of time.

It's planned, like many other things.
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

dadinio13

Quote from: g3gg0 on August 03, 2013, 07:26:51 PM
Cannot reproduce,
neither in CF-only mode nor in CF+SD file spanning mode.
Try 'gcc mlv_dump.c -o mlv_dump -I../../src' instead of make.

I just tried to recompile with this command and I still get the same error in raw2dng.
Could you upload your mlv_dump for testing please? I don't know what's wrong  ???

kgv5

Quote from: g3gg0 on August 03, 2013, 07:26:51 PM
That is how many MiB/s (CF+SD)?
Did you use the latest revision?

The newest version:
3.5K 3584x1320
23.976 fps - 1 sec (35 frames) (write speed CF 81-81.5 MB/s, SD 17-17.5 MB/s)
15 fps - continuous OK (write speed CF 101 MB/s, SD 17.7 MB/s)


www.pilotmovies.pl   5D Mark III, 6D, 550D

a1ex

Was this with all hacks enabled? (memory, small hacks and hacked preview - this is what I use for max speed).

kgv5

Quote from: kgv5 on August 02, 2013, 11:00:43 PM
5D3, global draw OFF
CF: Komputerbay 1000x 64GB (not the fastest one, benchmarks up to 90-92 MB/s), SD: SanDisk Extreme 45MB/s 64GB

Tried a couple of times with 1920x1080 30p
CF write speeds: 83.5-84.5 MB/s
SD: 17-18.5 MB/s
The longest clip I managed to record was over 3000 frames, 1:43, but that happened only once and I didn't take a screenshot. Normally I get over 1:10 many times.


Newest build: seems slightly slower, especially the SD card.
1920x1080 30p - recording 30-40 sec - CF write speeds similar at 83-85 MB/s, SD slower at 15-15.5 MB/s (previously 17-18.5)
www.pilotmovies.pl   5D Mark III, 6D, 550D

kgv5

Quote from: a1ex on August 03, 2013, 10:55:29 PM
Was this with all hacks enabled? (memory, small hacks and hacked preview - this is what I use for max speed).

Memory hack ON, small hacks ON. Preview tested with both Canon and hacked - no significant difference.


EDIT: I have done more testing at different resolutions and framerates; SD hardly ever exceeds 15-15.5 MB/s, so it is even 3 MB/s slower than the previous build.
The problem with initializing the SD card is gone.
www.pilotmovies.pl   5D Mark III, 6D, 550D

chmee

Appreciate your work, g3gg0 - if you were "ein Berliner" you'd get a beer or two :)
phreekz * blog * twitter

g3gg0

In 23.976 fps mode I get 74 MiB/s + 17.8 MiB/s over several tries. Sometimes my CF is slower, but I know it is a faulty card.

Here is the complete build for 5D3 that gave me these results.
It is the latest version, which also embeds level sensor data (if enabled in the ML menu) and white balance info.

Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

kgv5

Quote from: g3gg0 on August 04, 2013, 02:51:01 AM
In 23.976 fps mode I get 74 MiB/s + 17.8 MiB/s over several tries. Sometimes my CF is slower, but I know it is a faulty card.

OK, for 3.5K that's right, it exceeds 18.
But try 1920x1080 at 29.97, or even 1920x1280 at 29.97: it's 15-15.5. Now replace raw_rec.mo with the one from reply #107; with that one I am getting 18-18.5 with all the same settings and the same SD card. 3.5K is also 18-18.5.
www.pilotmovies.pl   5D Mark III, 6D, 550D

g3gg0

Oh wait, does it say continuous OK?
In that case the CF card is preferred and SD might be slower.
In total the write rate should be the same.
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

mucher

I am wondering whether it might be faster to create two different buffers for the two slots, or even two buffers writing to the same slot. Just a thought on write speed.

g3gg0

Two buffers won't give us any improvement, IMHO.
To be correct, we already have n buffers, where n is up to 70 or so on the 5D3,
and these buffers get assigned dynamically to CF or SD.


I know we can do another improvement.
Right now I am playing with two buffering methods each for queueing and dequeueing.

One problem that makes it hard to get optimal results is the non-consecutiveness of the memory buffers.
We can allocate e.g. memory for 70 frames, but this memory is fragmented, it is not continuous.
This means we have groups of 2, 8, 20 or any other number of frames (we call them slots, btw) that are continuous.
But the best thing for write speed is to write a consecutive block of nearly 32 MiB at once.

Buffer layout e.g.: [____] [__] [_____________] [________] [__]

Queueing:
Method a) pick the first available free buffer.
This method walks through the list of slots and checks for a free one.
It does not care whether the previous or next buffers are directly adjacent to the free one.

Method b) use Alex's algorithm, which tries to place frames so that we get the largest block with frames in order.
This was necessary for the old writer, where the frames had to be in order.

Better method:
search for the largest free space and try to fill it up.
The strategy behind it: build the largest block with frames in any order, so that we get the highest transfer speed.

Buffer layout e.g.: [____] [__] [______x1234x___] [________] [__]
The fifth frame would get enqueued at x, even if there is a lot of other free space.
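
In C, a minimal sketch of that "largest free space" queueing could look like this (the slot bookkeeping and all names are invented for illustration; this is not the actual mlv_rec code):

[code]
#include <stdint.h>

#define SLOT_COUNT 70

enum slot_state { SLOT_FREE, SLOT_FULL };

struct slot
{
    enum slot_state state;
    uint32_t group;   /* which contiguous memory chunk this slot lives in */
};

static struct slot slots[SLOT_COUNT];

/* pick the first slot of the longest run of free slots within one
   contiguous memory chunk, so new frames cluster into one big block */
static int choose_slot(void)
{
    int best = -1, best_run = 0;

    for (int i = 0; i < SLOT_COUNT; )
    {
        if (slots[i].state != SLOT_FREE) { i++; continue; }

        int start = i, run = 0;
        while (i < SLOT_COUNT && slots[i].state == SLOT_FREE &&
               slots[i].group == slots[start].group)
        {
            run++;
            i++;
        }

        if (run > best_run) { best_run = run; best = start; }
    }

    return best;   /* -1: no free slot left, the recorder has to skip a frame */
}
[/code]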


Dequeueing:
Method a) there are two writer threads (CF and SD) listening on one queue.
A dispatcher thread scans the buffers for the largest consecutive block and places a write job into the queue.
The next "free" thread receives this job and writes it, no matter whether SD or CF.

Method b) again two threads, but every thread has its own queue.
The dispatcher places the largest found block in the queue for CF, the faster device, and the smallest block in the queue for SD.
Strategy: try to keep the CF thread busy with the largest blocks to achieve the highest write rate. SD doesn't suffer that much
from writing smaller blocks. (Remember, ~32 MiB writes give the fastest possible transfer rate; on SD the speed difference isn't that high with smaller blocks.)

I think method b) gives the best results.
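
As a standalone C sketch (again with invented types and names, not the actual module code), method b)'s dispatcher could look roughly like this; a real dispatcher would also mark the chosen slots as in-flight so both writers never grab the same block:

[code]
#include <stdint.h>

#define SLOT_COUNT 70

enum slot_state { SLOT_FREE, SLOT_FULL };
struct slot { enum slot_state state; uint32_t group; };
static struct slot slots[SLOT_COUNT];

/* one write job: a consecutive run of filled slots */
struct write_job { int start; int count; };

/* find the largest (or smallest) run of full slots within one
   contiguous memory chunk; returns 0 if nothing is pending */
static int find_block(struct write_job *out, int want_largest)
{
    int best = -1, best_len = want_largest ? 0 : SLOT_COUNT + 1;

    for (int i = 0; i < SLOT_COUNT; )
    {
        if (slots[i].state != SLOT_FULL) { i++; continue; }

        int start = i, len = 0;
        while (i < SLOT_COUNT && slots[i].state == SLOT_FULL &&
               slots[i].group == slots[start].group)
        {
            len++;
            i++;
        }

        if (want_largest ? (len > best_len) : (len < best_len))
        {
            best_len = len;
            best = start;
        }
    }

    if (best < 0) return 0;
    out->start = best;
    out->count = best_len;
    return 1;
}

/* dispatcher: the largest block goes to CF (kept busy with ~32 MiB
   writes), the smallest goes to SD (which loses less on small writes) */
static void dispatch(struct write_job *cf_job, int *cf_valid,
                     struct write_job *sd_job, int *sd_valid)
{
    *cf_valid = find_block(cf_job, 1);
    *sd_valid = find_block(sd_job, 0);
}
[/code]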

But to get back to your idea, one possible improvement could be this:
make the dispatcher aware that it should only queue SD writes from large blocks if there is nothing else left.
Background: we don't want the slow SD to permanently disturb the queueing's building of large blocks.

[____] [12] [3________] [________] [__]

In this case the SD algorithm will pick frame 3.
This could lead to fragmentation like this:
[____] [12] [_____789_] [________] [__]

But CF would benefit from writing large blocks, so SD would probably just be annoying there.

So I am thinking of building a sorted list of buffers and their sizes.
SD will always try to clear the smallest buffer areas, while CF tries to clear the larger ones.
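
That sorted list could be as simple as ordering the pending blocks by size and letting the two writers take from opposite ends, e.g. (illustrative sketch only; it assumes n >= 2 and that the caller prevents both writers from picking the same block):

[code]
#include <stdlib.h>

struct block { int start; int count; };

static int by_size(const void *a, const void *b)
{
    return ((const struct block *)a)->count - ((const struct block *)b)->count;
}

/* sort the pending blocks by size: SD clears from the small end,
   CF from the large end */
static void pick_blocks(struct block *blocks, int n,
                        struct block *for_sd, struct block *for_cf)
{
    qsort(blocks, n, sizeof(blocks[0]), by_size);
    *for_sd = blocks[0];       /* smallest block: cheap for the slow SD */
    *for_cf = blocks[n - 1];   /* largest block: best throughput on CF */
}
[/code]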
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

kgv5

Quote from: g3gg0 on August 04, 2013, 08:18:08 AM
Oh wait. Does it say continuous ok?
In this case the cf card is preferred and sd might be slower.
In total the write rate should be the same.

Nope, "continuous OK" has never shown up. With the .mo from post #107 I am consistently getting faster SD writes by 2-3 MB/s (tested mostly with Canon's 1920x1080 29.97). Almost always recorded for more than 1 minute.
www.pilotmovies.pl   5D Mark III, 6D, 550D

mucher

Quote from: g3gg0 on August 04, 2013, 11:41:48 AM
Two buffers won't give us any improvement, IMHO.
...
So I am thinking of building a sorted list of buffers and their sizes.
SD will always try to clear the smallest buffer areas, while CF tries to clear the larger ones.

There should be an arbitration system between the CF write thread and the SD write thread, I guess, and possibly a powerful one.

A sorted list sounds like a good idea. My wild thought is that it might be good to keep a list of the RAW frames' addresses and let the CF/SD write threads sort their work according to that list. The question is how many frames each of them can fetch at a time -- that is where an arbitration system is needed.

chmee

Was there any discussion about the handling of split files? In the first implementation it takes unnecessary offset-wrapping to get at the pics. A better approach could be: while setting resolution and fps, calculate how many pics fit into one file, subtract one or two pics to be safe, then start the next file.
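
The calculation itself would be a one-liner; here is a sketch assuming 14-bit packed raw frames and a FAT32-style 4 GiB file size limit (the constants are illustrative, and real files also carry headers that take a little extra space):

[code]
#include <stdint.h>

/* how many frames fit into one file before hitting the size limit,
   minus the one-or-two-frame safety margin suggested above */
static uint32_t frames_per_file(uint32_t width, uint32_t height)
{
    const uint64_t file_limit = 4294967295ULL;               /* FAT32 max */
    uint64_t frame_size = (uint64_t)width * height * 14 / 8; /* 14-bit packed */
    uint32_t frames = (uint32_t)(file_limit / frame_size);
    return frames > 2 ? frames - 2 : 0;
}
[/code]

At 3584x1320 that is roughly 8.3 MB per frame, i.e. around 516 frames per file after the margin.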

Will just play with the new files in the next hours :) Regards, chmee
phreekz * blog * twitter

a1ex

This requires prior knowledge about whether the filesystem supports large files or not. The current approach does not need such knowledge (the writing algorithm figures it out on the fly).
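
One possible shape of that on-the-fly behavior, as a hedged C sketch (hypothetical helper, not the actual mlv_rec writer; a real implementation would also have to handle partially written blocks instead of simply retrying the whole block):

[code]
#include <stdio.h>

/* write a block; if the filesystem refuses to grow the file any further
   (e.g. at the 4 GiB FAT32 limit), roll over to the next chunk and retry.
   MLV spans files as NAME.MLV, NAME.M00, NAME.M01, ... */
static int write_block(FILE **f, int *chunk, const char *base,
                       const void *buf, size_t len)
{
    if (fwrite(buf, 1, len, *f) == len)
        return 0;

    /* assume the size limit was hit: close this chunk, open the next one */
    fclose(*f);
    char name[64];
    snprintf(name, sizeof(name), "%s.M%02d", base, (*chunk)++);
    *f = fopen(name, "wb");
    if (!*f) return -1;

    return (fwrite(buf, 1, len, *f) == len) ? 0 : -1;
}
[/code]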

chmee

@a1ex And what about fixing it at e.g. 2 GB per file? In the end you don't work with these files, but with the files after the conversion. So, simply speaking, does it matter whether they're bigger or smaller? (Every file could get its own header - it's logically the same as the first file with a different picture-offset count.)

Regards, chmee
phreekz * blog * twitter