Author Topic: MLV App 1.11 - All in one MLV Video Post Processing App [Windows, Mac and Linux]  (Read 438112 times)

masc

  • Contributor
  • Hero Member
  • *****
  • Posts: 1616
Does the same thing happen if you remove the first 200 from the project? Is it also slow for the first MLVs in the session, or only starting from the ~200th MLV? I'll try to reproduce this evening.
5D3.113 | EOSM.202

adrjork

  • Member
  • ***
  • Posts: 175
Does the same thing happen if you remove the first 200 from the project?
Yes.
Is it also slow for the first MLVs in the session, or only starting from the ~200th MLV? I'll try to reproduce this evening.
Only from the ~200th MLV, both with a single app running and with apps in parallel: if I have 2 apps running in parallel and, let's say, 200 MLVs in the first app, only the second app slows down (almost to a standstill).
I'll try to reproduce this evening.
Many many thanks, masc!

P.S. I tried creating MAPPs by loading packs of 200 MLVs. This trick helped for the first 400-450 MLVs; after that, even after quitting and restarting the app, the following packs of 200 MLVs slowed down a lot. Anyway, while creating the MAPPs, I noticed that the CPUs work discontinuously: Activity Monitor shows the CPUs for MLVApp at 200% one moment, then at 4% the next (as if they idle at intervals).

masc

  • Contributor
  • Hero Member
  • *****
  • Posts: 1616
The time to parse a clip's information depends on the clip length and the speed of your disk. CPU usage will be very low during parts of that. Once MAPP files are created, it should always be very fast.
Running apps in parallel is only recommended if you have many cores that are far from being used 100% (all cores in total, MLVApp + FFmpeg); if usage is below 50% it makes sense. Otherwise you will only slow down the process, because the apps fight each other.

I tried an import of 352 files and kept half an eye on it while preparing the export. I could not see any difference between the first and the last clips. On my 2010 MBP (OSX 10.9.5) + external USB 2.0 HDD, each clip took around 1-2 sec (without MAPP). MAPP creation also took around 1-2 sec per clip (clip lengths 5-20 sec, FHD).

Edit: I also tried on a newer system (OSX 10.13 + USB 3.0): more or less the same behaviour (only a tiny bit faster).
5D3.113 | EOSM.202

70MM13

  • Senior
  • ****
  • Posts: 435
i know that there are voices in opposition to using card spanning, but for those of us who love using it, would you consider adding support to mlvapp so we can read directly from the two cards and not have to copy the files into a single directory?  this may not seem like much of an issue, but it really gets in the way of production!

please and thank you!

ilia3101

  • Moderators
  • Hero Member
  • *****
  • Posts: 923
That would be a helpful feature... shouldn't be too difficult to implement: just check the DCIM/*EOS*/ folder on every other external disk for .M00 files with the same name.

@masc Does Qt offer a cross platform way of doing this?

masc

  • Contributor
  • Hero Member
  • *****
  • Posts: 1616
I did not find a feature like this in Qt.
The code for the spanning file search should be in the C part. So my idea would be to create a string variable for a "spanning file location" or similar, which the user can fill from the edit dock (individually for each clip) or from a menu (globally for all clips). What's the use case? Global?
If the existing code doesn't find a spanning file, it could search in this configured second place.
But even then: does this work if we have the "same" drive twice, one with the MLV and one with the M00? How does the system tell the drives apart if both have the same volume name (Unix)?
5D3.113 | EOSM.202

ilia3101

  • Moderators
  • Hero Member
  • *****
  • Posts: 923
Actually, maybe you're right, it could be done in the C part!

I didn't even realise. Will see what I can do later.

Danne

  • Contributor
  • Hero Member
  • *****
  • Posts: 6698
@70MM13
Are you using the spanning feature with sd_uhs module on?

masc

  • Contributor
  • Hero Member
  • *****
  • Posts: 1616
@Ilia: see static FILE **load_all_chunks(char *base_filename, int *entries) in video_mlv.c
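For illustration only, here is a rough sketch (not MLVApp's actual code, and the helper names `chunk_extension`/`chunk_path` are my own invention) of how the chunk naming used by `load_all_chunks()` could be extended with a configurable fallback directory, as discussed above:

```c
#include <stdio.h>
#include <string.h>

/* Build the extension of spanning chunk i: M00, M01, ..., M99. */
void chunk_extension(int i, char ext[4])
{
    snprintf(ext, 4, "M%02d", i);
}

/* Given a path like "CLIP.MLV", produce the path of chunk i.
   If fallback_dir is non-NULL, look for the chunk there instead
   of next to the .MLV (the "spanning file location" idea). */
void chunk_path(const char *mlv_path, int i,
                const char *fallback_dir, char *out, size_t n)
{
    const char *base = strrchr(mlv_path, '/');
    base = base ? base + 1 : mlv_path;

    char ext[4];
    chunk_extension(i, ext);

    if (fallback_dir)
        snprintf(out, n, "%s/%.*s%s", fallback_dir,
                 (int)(strlen(base) - 3), base, ext);   /* swap dir, keep name */
    else
        snprintf(out, n, "%.*s%s",
                 (int)(strlen(mlv_path) - 3), mlv_path, ext);
}
```

The real loader would then try the primary location first and only probe the fallback directory when a chunk is missing.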
5D3.113 | EOSM.202

70MM13

  • Senior
  • ****
  • Posts: 435
@danne,
yes, i am!
it is wonderful to record at high resolutions for long durations!  it works beautifully here.

see my candlelight music video for a great example.  one continuous take at 3072, 14-bit lossless.

Danne

  • Contributor
  • Hero Member
  • *****
  • Posts: 6698
Would you say it's reliable? Is your SD card still holding up?

70MM13

  • Senior
  • ****
  • Posts: 435
i am very pleased with it, and the only drawback i have experienced is the one i am asking for help with here...  i get lazy sometimes, especially with the zen episodes where i am more concerned with what i'm talking about than with visuals, and i wind up shooting at 1920 3x3 because it is so convenient.  but there's no question, using card spanning and the overclock is wonderful and rock solid.

i will be filming a new episode today/tomorrow using it.  due to my cellphone internet, i will be uploading it to youtube downsampled to 1080p, but it still looks much better than shooting at that resolution!

i'm using a sandisk extreme pro 170 MB/s sd card and KB 1066x CF

no problems with the cards after many sessions.  i'll share the link to the upcoming episode in the videos section once it's up...

it would be quite a treat to convert the mlvs to cdng straight from the 2 cards in mlvapp ;)

edit: i did just remember one "issue" i have with the overclock:  early on, i found that sometimes there were some issues with recording failures (immediate) if starting the camera with overclock ON.  it is now muscle memory for me to turn it off before shutting down the camera.  i don't know if it is camera-centric, but it may be a good idea to automagically turn it off in code at shutdown, or have it always OFF by default at startup (if it ever becomes an official function - which i vote for!)

adrjork

  • Member
  • ***
  • Posts: 175
Running apps in parallel is only recommended if you have many cores that are far from being used 100%
I'm using a hackintosh with a 10-physical-core i7 Extreme. I've tested multiple apps running in parallel and noticed the following:
1 App: T(ime);
4 Apps: T/2 (roughly);
8 Apps: T/2.66;
10 Apps: around T/2.88.
In a scenario with 33 TB to be converted, even the small difference of 2.88 over 2.66 is welcome (because it means fewer hours).
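As a back-of-the-envelope check (my own fit, not a measurement), these numbers are roughly what Amdahl's law predicts if part of the work is effectively serial:

```latex
% Amdahl's law: speedup with n parallel instances,
% where p is the parallelizable fraction of the work.
S(n) = \frac{1}{(1 - p) + p/n}
% Fitting the observed S(4) \approx 2 gives p \approx 2/3, which
% predicts S(8) = 2.4 and S(10) = 2.5 -- the same ballpark as the
% observed 2.66 and 2.88, consistent with roughly a third of the
% work (e.g. disk I/O, single-threaded decoding) being serial.
```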
The time to parse a clip's information depends on the clip length and the speed of your disk.
I'd say that could be the real bottleneck, since all my MLVs are stored on an 8-HDD 48 TB RAID 6 Thunderbolt 2 array, which is only "relatively" fast, certainly not as fast as even a single SSD. But in your test you are using a USB 2.0 HDD, which should be much slower than my RAID, so the mystery of my slowness remains unsolved...
On my 2010 MBP (OSX 10.9.5) + external USB2.0 HDD each clip took around 1-2sec (without MAPP). MAPP creation also took around 1-2sec per clip (clip lengths 5-20sec, FHD).
Perhaps the answer to the mystery is the clips' duration: my clips (around 3400 for this last project) run from a minimum of 1 minute to a maximum of 40 minutes each.
I've noticed that if I drag up to 220 "unknown" clips into MLVApp, it can take 1'45" before the clips appear in the Session panel. Then opening a single clip can take 10 seconds (with a MAPP it's only slightly faster). Creating a MAPP can take from 1" to 20" per clip, but creating MAPPs seems to be the ONLY solution for me. Without MAPPs, I can load at most about 220 clips (even when distributed across several parallel instances of MLVApp); I "could" load more, but once the conversion starts the app re-parses all the clips first, and from the 221st clip onward that re-parsing time grows monstrously (the app "seems" crashed). WITH MAPPs, I can load many more clips without that stall (the re-parsing time of 1-10 seconds per clip remains, of course, but at least there is no stall anymore).

Luther

  • Senior
  • ****
  • Posts: 317
@adrjork you have a shitton of data. MLVApp is not really suited for that kind of thing (CPU-only processing). I'd say it's time for you to invest in something like Transkoder.

Danne

  • Contributor
  • Hero Member
  • *****
  • Posts: 6698
How will that work with mlv files?
I agree on mlv structure and indexing not being suited for that kind of heavy batch work. Too bad indexing can't be skipped.

adrjork

  • Member
  • ***
  • Posts: 175
I'd say it's time for you to invest in something like Transkoder.
Hi Luther, thanks for the reply. It seems that Transkoder doesn't handle the MLV format. I've seen an MLV-to-MOV program for Windows that reportedly runs on the GPU (here), but foolishly, ages ago, I formatted my MLV storage as HFS+ (with exFAT I could switch to Windows if needed), and I've also sold my Nvidia cards... Another option is slimRAW, but it seems limited to MLV-to-CDNG (no MOV, no proxies).
So, up to now, the most realistic option for me remains MLV-App.

Luther

  • Senior
  • ****
  • Posts: 317
Quote
How will that work with mlv files?
Quote
It seems that Transkoder doesn't handle MLV codec.
The same way people work with MLV in Resolve: by transcoding to CDNG. Transkoder can take full advantage of the GPU using CUDA. It might be expensive, but I would seriously consider it if I needed to process multiple terabytes of data.
Quote
I formatted my mlv-storage with HFS+
Consider using ZFS instead. This is what most people processing large amounts of data use. Windows and OSX have support for it (I don't know how well those ports work, since ZFS was built primarily for Linux/BSD).
Quote
I've sold my Nvidia cards
The older Nvidia architectures are very cheap nowadays. I bought my Nvidia 1050 for about $120. I think that's a fair price in an era where people pay $400 for Apple wheels...

adrjork

  • Member
  • ***
  • Posts: 175
Thanks, Luther, for your advice.
transcoding to CDNG.
Unfortunately, I don't have enough free space to convert all the clips to DNGs. My idea is to convert the clips directly to a proxy codec just to experiment with various editing hypotheses, and only once I know which clips I actually need will I convert those few from MLV to DNG. But for now I need a direct MLV-to-MOV solution.
Transkoder can take full advantage of GPU using CUDA.
I sadly said goodbye to Nvidia a year ago (I sold 3 Titan X GPUs) because the latest Davinci seems to need a recent OSX, but Mojave doesn't work well with Nvidia, right? So I went for a couple of Radeon VII GPUs.
Consider using ZFS instead.
That's interesting! But how can I reformat my external storage without deleting the MLVs on it??? Is that possible???

Luther

  • Senior
  • ****
  • Posts: 317
Unfortunately, I don't have enough free space to convert all the clips to DNGs.
See: https://www.magiclantern.fm/forum/index.php?topic=13152.0
Quote
once I know which clips I actually need, I'll convert those few from MLV to DNG.
MLVApp is great for previewing. Only exporting is painfully slow sometimes.
Quote
Davinci needs a recent OSX, but Mojave can't work well with Nvidia, right?
As far as I know, that's not true. But you could use a Linux distro like Debian. It has Nvidia drivers, and Davinci works on it (faster than on Windows/OSX). Not surprising that Pixar uses Debian on their render farm...
Quote
So I went for a couple of Radeon VII GPUs
I like AMD too. I have an AMD CPU. AMD is more cooperative with the open source community than Nvidia/Intel. But unfortunately CUDA is way ahead of OpenCL, and CUDA is Nvidia-specific. For heavy processing like yours, Nvidia is the only cost-effective solution.
Quote
How can I re-format my external storage without deleting my mlvs inside??? Is it possible???
This is called "in-place filesystem conversion". I don't think it is possible from HFS+ to OpenZFS, though. You can read (not write) HFS+ on Windows using this tool (it seems... never tested). On Linux you can read and write HFS+ filesystems.
The best solution would be to get new HDDs (I highly suggest Western Digital over Seagate), then copy everything over and erase the old HDDs. This way you ensure everything is in the right place, on a fresh filesystem. ZFS is great for large amounts of data. You could also consider RAID mirroring.

adrjork

  • Member
  • ***
  • Posts: 175
Luther, your reply is hugely informative! Many thanks.
I understand your point. A migration to Debian is a bit problematic for me right now (various things keep me on OSX, not only Davinci), but I'll seriously consider it for a future project.

Milk and Coffee

  • Freshman
  • **
  • Posts: 93
Does MLVapp automatically set black & white points for the footage depending on its bit depth? Or should we be doing that manually?

Also, should I always be doing something in the "RAW Correction" panel? If the stream looks good, I can disable it, yes?
Gear: Canon 5D Mark II

reddeercity

  • Contributor
  • Hero Member
  • *****
  • Posts: 2231
I'm using an hackintosh with an i7-extreme
......
8-HDD 48 TB RAID 6 Thunderbolt 2 array, which is only "relatively" fast, certainly not as fast as even a single SSD.
Same here, a "hackintosh": an i7-3770K overclocked to 4.8 GHz with an ATTO PCIe ExpressSAS RAID card (4x 2 TB = 8 TB) in a RAID 5 of 6 TB total with a spare drive.
I tried RAID 6 but it was too slow (around 350 MB/s), so I changed to RAID 5 (3 disks + 1 spare) and got around 800-1200 MB/s on an empty drive.
I keep the RAID no more than 50% full; beyond that it slows down too much, to 600 MB/s, but below that (30-40% full) I can maintain 800 MB/s.
So if you can, it would really be better to change from RAID 6 to RAID 5.

I also use a (cross-platform Mac/PC) home-built FreeNAS box with a 6-drive RAID 5 (4 TB) configuration in ZFS over a 1 Gb network connection (100-130 MB/s),
and I edit with FCPX. I leave the whole project on the NAS box, proxies and all, and don't notice any slowdown.
I even use the NAS box's RAID 5 to mount MLVs with MLVFS and import into After Effects (CS6) or BM Resolve without issue.

My 2 cents' worth  :D

masc

  • Contributor
  • Hero Member
  • *****
  • Posts: 1616
@adrjork: sounds like your computer is a tiny little bit faster than mine and your projects are also a tiny little bit bigger than mine.  ;D The projects I usually render are about 0.5 TB with 300-500 clips in one session. Sorry, but I can't reproduce anything at these (your) dimensions. Anyway, I don't see any reason why the app should slow down after 200 clips. It should take longer for bigger clips (for opening, for creating MAPPs and for exporting); that is expected. It doesn't matter how many clips you load, because MLVApp always handles exactly one clip at a time. So if it is the only app running, this one instance gets 100% of the resources for one clip.
There are differences in loading and processing: lossless compression always slows things down, because lossless decoding is always single-threaded. Clarity, highlights & shadows and the RBF * sliders (if they are different from 0) are also single-threaded, as is DualISO. All other processing is multithreaded. Thanks for your test of how processing time changes with the number of app instances. I don't have a 10-core system here, so I can't test with that many cores. Multiprocessing always has some overhead for creating and collecting threads. Interesting how big this overhead really is.

Does MLVapp automatically set black & white points for the footage depending on its bit depth? Or should we be doing that manually?
No. The white and black points are set by ML in the camera. MLVApp reads this metadata and lets you adjust it in case it wasn't correct.
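For illustration, one plausible model of how the recorded levels relate to bit depth (this is my own sketch, not MLVApp or ML source; the 14-bit value 2048 is a typical figure, used here only as an assumption, and `scale_level` is a name of my own):

```c
/* Illustrative sketch only: ML's sensor data is 14-bit, so when a
   lower recording bit depth is chosen, the black and white levels
   shrink by the corresponding right shift. */
int scale_level(int level_14bit, int bit_depth)
{
    return level_14bit >> (14 - bit_depth);
}
```

For example, a 14-bit black level of 2048 would become 128 at 10 bits under this model.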

Also should I always be doing something in the “RAW Correction” panel? if the stream looks good, then I can disable it yes?
You should do what is necessary. If everything is fine without RAW Correction, you don't need it.
5D3.113 | EOSM.202

Milk and Coffee

  • Freshman
  • **
  • Posts: 93
No. The white and black points are set by ML in the camera. MLVApp reads this metadata and lets you adjust it in case it wasn't correct.

In the ML menus, the black level is always set to "0" even if I change the bit depth. Is the correct value still being set in the metadata?
Gear: Canon 5D Mark II

adrjork

  • Member
  • ***
  • Posts: 175
Wow guys, really, thank you both reddeercity and masc for the fantastic information!

@reddeercity: Your system blows my mind: very smart! And FreeNAS is very interesting!!! Thanks also for the tip on RAID 5 vs RAID 6. Surely my current project RAID is slowed down both by being full to the brim (48 TB in RAID 6 is 36 TB, and I have more than 33 TB of data inside...) and by the RAID 6 configuration itself. I wanted RAID 6 because for this project I was terrified of losing data, and indirectly this is also one of the reasons I'm avoiding MLVFS: I want to do my proxy edit (and the future DNG edit) without the external RAID always powered on (to extend its life and avoid the noise of its fans). :) The proxies (now) and the final selected DNGs (later) will go onto an internal 8 TB 4-NVMe RAID 0 (HPT with 4x EVO 2 TB drives), which is my secret weapon ;) together with the two Radeon VII GPUs (I've tested grading uncompressed 14-bit DNGs with a bunch of nodes, temporal denoise, effects... and the preview in Davinci always plays in real time! Like a boss 8)

@masc: yes, I was surprised by the "limit" of about 220 long clips before loading slows down. It's strange. Anyway, here is what I did to get the job done:
1. I loaded "packs" of 220 long clips into MLVApp (around 2 minutes per pack) and simply created the MAPPs (more than 30 minutes per pack, so it was a looong 12-hour work day);
2. I opened 10 instances of MLVApp and loaded 334 long clips into each instance (I have a total of 3340 clips to convert);
3. With all the instances idle, I ran the first one just to let it re-parse all the clips, and once the actual conversion started I aborted;
4. I repeated step 3 with every other instance;
5. After all the re-parsing was done, I ran all the instances together (Activity Monitor reports around 160% CPU per instance).
Now my sweet hackintosh is working... alone... in the darkness of its tiny bedroom, dreaming (perhaps) of a magic lantern. We'll meet again in three days. Good night :)