MLVFS - a FUSE based, "on the fly" MLV to CDNG converter

Started by dmilligan, August 31, 2014, 02:01:24 AM




To Windows users:
I ran into a problem with very high resolution footage using card spanning, with frames being corrupted.
It seems to be solved by updating to Dokan v


"upgrading to the last 1.x version of dokan seems to have solved the issue... v"

what is this dokan for?


Windows doesn't have FUSE; Dokan is the alternative. Look at the first post in this thread for details.

The Dokan link provided there is from 2014 and points to a much older version.


Beginner here who's been using MLVFS recently for editing MLV files from my EOS M in Resolve, and it is working just as I hoped, except for the focus pixels / pink dots in the video.
I tried to search the forum for a solution, but could not easily find one (forgive me if I missed it). The only simple instructions I found were these:

Quote from: dfort on April 18, 2019, 09:21:50 PM
Realized that there are several newcomers trying to figure out how to use the focus pixel map files so here's a quick tutorial:

The easiest way to get the map files is to download the whole repository from the downloads page:

Take all of the .fpm files in the focus_pixel_map_files directory and place them next to the MLVFS binary. I'm on a Mac, so it is located in my home directory > Library > Services > MLVFS > Contents (Control-click on MLVFS to get into the package contents). I've never used MLVFS on Windows or Linux so I'm not sure how that works there.

Same thing with MLV App, put the .fpm files next to the binary. MLV App is currently the best way to work with 5K Anamorphic files and yes, it is possible to export cdng using that application with fixed focus pixels and adjusted aspect ratio.

Either MLVFS or MLV App is a great way to get your raw footage into editing systems that can handle cdng files like DaVinci Resolve or After Effects. However -- only editing apps that use Adobe Camera Raw (like After Effects) seem to respect the resize and aspect ratio metadata in the cdng files, so you'll have to adjust the image size if you want to use these dng files directly in Premiere or Resolve. You can also export to ProRes or several other file formats from MLV App and not worry about the resize/aspect ratio issues.

But the link is not working.
How do I remove these Focus Pixels in MLVFS now?

MLVFS Version: 24ebdf5 May 5 2017 18:56:15
Resolve 17
macOS Catalina


julien becker

Quote from: Danne on March 14, 2023, 04:41:18 PM
I have bash integration with Mlvfs through menu navigation in this app/workflow:

select "ml" from the root menu. Fuse from here:
Install the Apple Silicon or Intel version depending on which machine you are running.

Thanks Danne for pointing to Switch, it's working very well, at least for non-dual-ISO files.
Perhaps I'm doing something wrong, but when I choose option (10) to render Dual ISO and then A to activate MLVFS, I get the following message:
Quote: Please enter your selection number below:
ls: /tmp/no_files: No such file or directory
ls: /tmp/SCALETAG: No such file or directory
ls: /tmp/DUALISO/crop_rec: No such file or directory
fuse: unknown option `-n'

and the Finder seems to freeze; I need to restart.
Do you know why this happens?


Dual ISO is not very nice in MLVFS.
Rendering to ProRes is quite nice.


MLVFS for Dokany 2.1

- Install Dokany v2.1.0.1000

- Edit mlvfs.bat by changing the path to your MLV files (or just the drive letter of your SD card) in the argument --mlv-dir=E:\DCIM\100CANON\, or use it directly in a command prompt

Nvm, it produces white dots on the dark parts of the picture.

If someone needs a workflow that works with the free Resolve version, I shared my files there:


So, I was thinking about the issue I have with MLVFS -- namely, that even with a super-fast NVMe drive, my CPU power (i5-8500) is not enough to guarantee real-time (24/25 fps) playback in DaVinci Resolve when using MLVFS with 3.5K (3.7K in UHD terms), 14-bit lossless compressed raw footage. It gets close, but not close enough.

So I realized that, at least for the purpose of selecting shots / editing / color grading, half fps would probably be within my CPU's reach and would not be such a bad experience for me. And when I say 'half fps' I mean skipping one frame out of every two.

I tried to find out whether there is such an option in DVR, but apparently there is not. So I was wondering if this could be possible on the MLVFS side: maybe some sort of option that would expose to the file system (and to DVR in the end) 1. an fps metadata value which is half the actual one, and 2. one frame out of every two (say, only the odd ones, but numbered consecutively -- 001, 002, 003 etc.).
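To make the metadata half of the idea concrete, here is a sketch. The sourceFpsNom / sourceFpsDenom names are my assumption of how the MLV header stores the frame rate as a rational; check mlv.h in the MLVFS sources before trusting them:

```c
#include <stdint.h>

/* Sketch only: halve the fps reported in metadata without touching
 * frame data, assuming the rate is stored as a rational
 * (sourceFpsNom / sourceFpsDenom in the MLV file header). */
static void halve_fps(uint32_t *nom, uint32_t *denom)
{
    if (*nom % 2 == 0)
        *nom /= 2;        /* e.g. 24000/1001 becomes 12000/1001 */
    else
        *denom *= 2;      /* e.g. 25/1 becomes 25/2 */
}
```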

Do you think this would work?

(And: is there anybody out there able to play back 3.5K 14-bit footage in real time using MLVFS?)
5D3 for video
70D for photo


Sounds plausible to me.  I think the bigger problem is: who's going to do the work?  The last time MLVFS was updated looks to be 8 years ago?


I was expecting this objection ;)

My only hope is that 1. this new feature is not so difficult to implement (and it does not sound like it is), 2. some guy smarter than me at reading code is experiencing the same issue, and 3. he/she will read this post and think "Dang! Good idea! Let me implement this!"
5D3 for video
70D for photo


Nobody wanted to do it in the last 8 years.  I think it will be quicker if you learn how yourself.  No harm asking, of course.


Define 'quicker'

Anyway, I might have some spare time when my sons are grown up; that will happen in a few years, so I will look into it then and let you know how it goes ;D

Joking aside, I took some C/C++ courses back at uni some 20 years ago and have been using Python for scientific programming at work since then. If somebody could at least give me a hint about where to start looking, and what to look for, I would give it a shot.
5D3 for video
70D for photo


Quicker: taking less than 8 years :)

Having never looked at the MLVFS code, and never worked with FUSE either, I can only wildly speculate.  I assume MLVFS translates accesses to the MLV file by indexing within the file to know where each frame is, checking what's being accessed, and turning that frame into a DNG (or whatever, I've never used it).

So, assuming I'm guessing right, somewhere in there it maps access location to frame.  You'd want to change that mapping so accesses to frames 1 or 2 returned the data of the dng generated for 1, and you'd want to ensure there's caching.  I'd guess the FUSE layer does caching for you.

For example, if we pretend frames are at even 1000 byte offsets, and that 1000 bytes of MLV frame turns into 1000 bytes of DNG, reads from offset 10 and offset 1010 into the MLV should both return the data offset by 10 in the DNG generated for the first frame.
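If that guess is right, the mapping could be sketched like this. All the names are invented (this is not MLVFS code), and it assumes fixed-size virtual frames plus a 'half frames' flag:

```c
#include <stddef.h>

/* Hypothetical sketch of the offset-to-frame mapping described above.
 * Each virtual frame occupies frame_size bytes; with half-frame mode
 * on, every odd frame is aliased to the even frame before it, so
 * reads into frames 0 and 1 both resolve to the DNG built for frame 0. */
static size_t offset_to_frame(size_t offset, size_t frame_size, int half_frames)
{
    size_t frame = offset / frame_size;
    if (half_frames)
        frame -= frame % 2;   /* alias odd frames to the previous even one */
    return frame;
}
```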


Thanks n_a_h.

So at the end of the day, it looks to me like this should be even simpler than that. I already verified that, by just replacing

for (int i = 0; i < frame_count; i++)
{
    sprintf(filename, "%s_%06d.dng", mlv_basename, i);
    filler(buf, filename, NULL, 0);
}

with

for (int i = 0; i < frame_count; i += 2)
{
    sprintf(filename, "%s_%06d.dng", mlv_basename, i / 2);
    filler(buf, filename, NULL, 0);
}

in the mlvfs_readdir() function of main.c, I get half the frames in the directory. Then I think that the function get_mlv_frame_number(), which returns the frame number from the filename, can easily be hacked too. And then I will need to understand how to tweak the reported fps.
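For the filename side, a hedged sketch of what that hack could look like. The real get_mlv_frame_number() may parse differently; this standalone version just assumes the "%s_%06d.dng" naming scheme used in the snippet above:

```c
#include <stdio.h>
#include <string.h>

/* Sketch: once readdir() lists only every other frame under
 * consecutive names (000000, 000001, ...), the index parsed from the
 * filename must be doubled to point back at the real frame in the MLV.
 * Returns -1 if the filename does not match the expected scheme. */
static int filename_to_source_frame(const char *filename, int half_frames)
{
    const char *u = strrchr(filename, '_');
    int index;
    if (!u || sscanf(u + 1, "%d", &index) != 1)
        return -1;
    return half_frames ? index * 2 : index;
}
```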

However, I realized that even leaving the code untouched, the MLVFS I can compile produces corrupted DNG frames. The only changes I was forced to make are related to FUSE (basically, OSXFUSE.framework was rebranded to macFUSE.framework somewhere in the past 8 years, and the FUSE header path moved from /usr/local/include/osxfuse/fuse to /usr/local/include/fuse).

Any clue? Has anybody successfully built a working MLVFS recently?
5D3 for video
70D for photo


Never mind, I just realized that I should try cedricp's fork. Will test in the w/e. Mmh, nah. Don't think that's the point.
5D3 for video
70D for photo


I use MLVFS with FUSE for arm64 (Mac M1) installed, integrated in the script-based program 'Switch'.
PNG support seems broken.


Cool, thanks Danne, I compiled your version and now frames are fine.

I was using the GitHub-based davidmilligan/MLVFS repo, whose final commit is 9f81918 [May 31, 2016], while thanks to your hint I now realize that there is a Bitbucket-based dmilligan/mlvfs repo with a few further merges from bouncyball, dfort and g3gg0, up to commit 222f87c [Jul 23, 2018]. I guess something was fixed / improved / implemented there regarding lossless compressed raw MLV files.

Thanks, will work on this!

EDIT: can anyone explain the GitHub-based cedricp/MLVFS commits dated Jun - Jul 2022? His work seems to be based on commit 9f81918 [May 31, 2016].
5D3 for video
70D for photo


What is there to explain?
It would be great if you manage to speed things up.


Quote from: vastunghia on May 18, 2024, 06:48:29 PM: EDIT: can anyone explain the GitHub-based cedricp/MLVFS commits dated Jun - Jul 2022? His work seems to be based on commit 9f81918 [May 31, 2016].

I only skimmed this, and the commit comments aren't very helpful.  Some of the work is definitely based on the Bitbucket dmilligan repo.  Just copy-pasted, so you lose the commit history :(  You will want to cross-reference these two repos, I would guess.  Some of the work is similar and for the same feature: lossless MLV support.  The cedricp repo also adds CMake build support (?) and moves a bunch of files around.  Perhaps this is important, perhaps not.  No explanation is given for why this was done.

Actually, properly determining what these changes do is several hours of work.  I'm not volunteering, sorry!


Appreciate your effort, thanks. Somehow I came to similar conclusions; I just hoped maybe the developer had posted somewhere on this forum about his objectives and his progress, and some ML hero member would recall the posts.

Guess I will skip that repo. I started playing around in my own repo; will post when I have at least a PoC.
5D3 for video
70D for photo


So as promised here is my PoC: code and test release.

You get a 'Force half frames' option in the web GUI, and by enabling it you will see that the resulting DNG files are read by (say) DVR at half their actual fps, though still retaining their original duration (in a nutshell: one frame out of every two is dropped, in a seamless manner).

What next:

  • Test whether this makes any sense, i.e. whether there is any real benefit in terms of playback smoothness (it looks like there is, but not as much as I hoped; I will need to come up with a way to measure the benefit)
  • Test what happens when one switches the option on and off -- I think there is some caching going on, as DVR does not seem to notice changes for clips that have already played back. I should find a way to clear the cache upon switching the option; unfortunately I have no clue at what level this caching is happening (MLVFS? FUSE? OS? DVR?)
  • Come up with the best workflow to do editing / grading with the option turned on and then switch to rendering with the option turned off, in the smoothest possible manner
  • In light of the above points, consider whether it could be better to always have two FUSE mounts (or, FWIW, one single mount with two sub-folders), one with full-fps footage and one with half-fps, the latter to be used as a kind of 'proxy' footage for the former. This would make the switch between the two worlds boil down to changing the source folder in DVR, and could bypass the caching issue.

I'm not expecting this work to raise much attention, as I understand that not many people are using / willing to use 3.5K 14-bit footage from the 5D3 with MLVFS+DVR. However, any suggestion will be appreciated.
5D3 for video
70D for photo


Is it possible to speed up the code in general? Multiprocessing to get more CPU action? It seems dmilligan outlined a very good program, but optimising still seems possible.


I think that if the OS asks for n DNG files simultaneously, FUSE will send n requests to MLVFS, which in turn will trigger n simultaneous extractions of n raw frames from the MLV file. So IMO MLVFS should already be parallel-ready, in a sense.

I think that parallelization should be achievable via proper scheduling of frame requests on the editor's side. Unfortunately, of course, this is not possible unless you are on Blackmagic Design's dev team. ::)

I guess DVR asks the OS for the next frame, waits for it, and then asks for the following one. Which of course makes sense if the time needed to get it is just related to drive read speed and not to CPU time — which is the normal condition with actual, non-virtual DNG frames.

But I may be wrong of course. Would be happy to hear other opinions.

One more thought: in DVR, the first time I play back a clip it stutters a lot and sometimes stops. The second time, it is much better. From the third on, it plays butter smooth. So I think there must be some RAM caching going on somewhere. If it is on DVR's side, I'm afraid we cannot do much about it. If it is on the MLVFS / FUSE side, then we may try to force caching *before* DVR tries to read the whole DNG sequence. Damn, I have 64 gigs, I would love to fill it with cached DNG sequences!
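One cheap way to test the 'force caching before DVR reads' idea, assuming the cache in question sits at or below the OS level (the helper name and the .dng glob are mine, not part of any tool):

```shell
# Sketch: read every virtual DNG in a mounted clip folder once, so any
# RAM cache (OS page cache, FUSE, or MLVFS-internal) gets populated
# before playback in Resolve.  Usage: warm_cache /path/to/mounted/clip
warm_cache() {
  for f in "$1"/*.dng; do
    cat "$f" > /dev/null || return 1   # fail if any frame can't be read
  done
}
```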

It could be helpful to make MLVFS log individual frame read requests with their corresponding timestamps, to try to understand the time sequence in which DVR asks for frames, and to check whether frame requests are repeated during the third playback, when everything seems to be cached already. I may give it a try as soon as I have 10 minutes.
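A minimal sketch of such a logging helper, assuming it would be called from the FUSE read handler (the function and its signature are invented, not existing MLVFS code):

```c
#include <stdio.h>
#include <sys/time.h>

/* Sketch: log each read request with a timestamp.  Called from the
 * FUSE read handler, this would reveal the order and timing of
 * Resolve's frame requests, and whether the same frame is re-read on
 * a later playback (i.e. whether caching happens above or below here). */
static void log_read(FILE *log, const char *path, long offset, size_t size)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    fprintf(log, "%ld.%06ld read %s offset=%ld size=%zu\n",
            (long)tv.tv_sec, (long)tv.tv_usec, path, offset, size);
    fflush(log);   /* don't lose entries if playback hangs */
}
```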

EDIT: and apart from the above, if I get it right, at the end of the day MLVFS just takes raw data and makes it available as raw data, just in another format / container. So I guess the code cannot be optimized that much in any case. The only real processing MLVFS does is maybe decompression? For lossless compressed MLV files — which is the format I use 100% of the time, btw. So maybe that algorithm. Though even in this case I bet there is not much optimization to be done.
5D3 for video
70D for photo