(Linux/Mac) convmlv: Featureful MLV Developer

Started by so-rose, March 05, 2016, 12:10:29 AM


so-rose

Hello,

This is, simply enough, an offline image processing tool converting ML formats (RAW, MLV, DNG sequences) to many usable formats (EXR, DPX, and Prores). It runs on Linux and Mac, and focuses on features and output quality (More in the Features section).

It's essentially a huge swath of glue between a bunch of great tools.

I use it myself - these two short films were developed, graded, and edited in a single weekend, made possible by convmlv (v1.7)!

http://youtu.be/yi-G7sXHB1M

Huge thanks to @Danne and @bouncyball for figuring out the color management. Thanks to @dfort for his badpixels script and to @dmilligan for MLVFS! Another huge thanks to the whole Magic Lantern community for the tools, without which nothing here would be possible.

Download/Install

Getting convmlv on your machine goes like this:

  • Download the release tarball for your platform and extract it.
  • Optional: Download the .sig file and verify the tarball's integrity, so you know that nobody has modified the software between me and you (see the sketch just after this list).
  • Install dependencies; see the bundled docs.pdf.
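
If you have GnuPG installed, the verification step is roughly this (a sketch only; substitute the actual tarball and .sig filenames from the release page, and import the signing key first if gpg doesn't know it):

gpg --verify convmlv-v2.0.3.tar.gz.sig convmlv-v2.0.3.tar.gz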

Latest Release: https://github.com/so-rose/convmlv/releases/tag/v2.0.3 <-- Download here!
Repository: https://git.sofusrose.com/so-rose/convmlv <-- The cutting edge!
GitHub Mirror: https://github.com/so-rose/convmlv <-- A simple mirror to GitHub.

Dependencies on other ML tools are bundled in the release. Refer to the help page (under MANPAGE, or run './convmlv.sh -h') for links to these.

Make sure the script and all bundled binaries have execute permissions (run 'chmod +x file'); otherwise convmlv will fail. If you try to run it anyway, it will tell you what's missing and where to get it!
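
For example, from the extracted release directory (a sketch; the exact binary names and layout depend on the platform bundle):

chmod +x convmlv.sh            # the script itself
chmod +x mlv_dump raw2dng      # ...and each bundled binary, e.g. mlv_dump and raw2dng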

General Info

Documentation: You can find an up to date tutorial/documentation PDF I made under 'docs/docs.pdf' in the repository. It's also bundled in the release.

Here's the link to it: https://git.sofusrose.com/so-rose/convmlv/raw/master/docs/docs.pdf?inline=false


v2.0.3: Some more bug fixes, based on a bug report.
*Fixed bug with split MLV files by piping the FRAMES output through "paste -s -d+ - | bc" (see the sketch just after this list).
*Fixed the color-ext LUTs, which were unfixed due to laziness.
*Fixed mlv2badpixels.sh when mlv_dump wasn't on the PATH. Required modding the script; a patch is in the main repo, and is distributed alongside binaries.
*Added documentation that symbolic linking support is required.
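
For the curious, the split-file fix boils down to summing the per-chunk frame counts, roughly like this (an illustrative sketch):

# FRAMES holds one frame count per line, one per chunk (.MLV, .M00, .M01, ...), e.g. 1200 / 1200 / 350.
# paste -s -d+ joins the lines with '+' ("1200+1200+350") and bc evaluates the sum.
echo "$FRAMES" | paste -s -d+ - | bc    # -> 2750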

Full changelog can be found in the repository, under CHANGELOG.


Bad Pixels File Example: You can find a sample .badpixels file in the repository (the Download link), which you can adapt to remove bad pixels (which all cameras have) from your footage.
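
If the .badpixels file follows dcraw's dead-pixel format (which the -P discussion later in this thread suggests), it's one pixel per line: column, row, and the UNIX time the pixel went bad (0 = always bad). A made-up example with two dead pixels:

1417  883   0
2005  1121  0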

Darkframe Information: Read more about darkframe subtraction here: http://magiclantern.fm/forum/index.php?topic=15801.msg164620#msg164620

Config File Examples: You can find example config files in the download repository, under configs/*.conf.

All command line options must go before the list of MLV/RAW files or DNG folders.
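
For example (hypothetical filenames; note the ordering):

convmlv -m -o ./output A0001.MLV A0002.MLV    # correct: options first, then the files
convmlv A0001.MLV -m -o ./output              # wrong: options placed after the file list won't be picked up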

Features
convmlv is essentially a big piece of interface glue between many great image processing tools, whose features it in many cases inherits directly!

-Easy usage w/good defaults - specify -m, follow with a list of .MLV files to develop. (Type in terminal: convmlv -m *.MLV)
-Create ready to edit image sequences/video files in all kinds of losslessly compressed formats.
-Offline image quality with good, highly multithreaded performance.
-Develop a specific frame range easily. MLRawViewer is a great companion to find desired frame ranges.
-Complete control over the RAW process itself: Highlight reconstruction, demosaicing, color space, chroma smoothing, white balance, etc.
-Color managed, with a variety of color space options. The philosophy is one of no quality loss from sensor --> output.
-Several noise reduction techniques, from wavelet to high quality temporal-spatial denoising, to astro derived darkframe subtraction, to experimental FFMPEG modules.
-Easy HDR (Dual ISO) processing, with the -u option.
-Easy bad pixel removal. The -b option (courtesy of @dfort) removes pink dots, and can be combined with your own .badpixels file mapping out the dead pixels on your camera.
-Since the output can be very heavy to edit with, it's simple to create edit-friendly color managed JPG/MP4 proxies alongside.
-Several FFMPEG filters (multiple 3D LUTs, temporal denoising, hqdn3d, removegrain, unsharp, and deshake currently - request more; they're very easy to implement). See the sketch just after this list for the flavour of filter chain involved.
-Reads Camera WB, but can also use a homegrown AWB algorithm or no WB at all.
-Extracts MLV sound to a WAV, and metadata into a settings.txt file.
-Portable Bash script - fully compatible with Linux and Mac (Windows is untested).
-Production-ready config file format lets you specify any option on a global or per-MLV basis in a single config, saving enormous amounts of time on a deadline.
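
To give a flavour of the FFMPEG filter side, the kind of chain involved looks roughly like this (an illustrative sketch, not convmlv's actual invocation; the frame pattern and 'look.cube' LUT are placeholders):

ffmpeg -r 24 -i frame_%06d.tiff -vf "lut3d=look.cube,hqdn3d,unsharp" -c:v prores_ks out.mov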


Documentation/How To

Tutorial: You can find an up to date tutorial/documentation PDF under 'docs/docs.pdf' in the repository. It's also bundled in the release.

Here's a link to it: https://git.sofusrose.com/so-rose/convmlv/raw/master/docs/docs.pdf?inline=false

Help Page: The primary, most esoteric (but most up-to-date) documentation is the help page. You can access it in docs/MANPAGE in the repo, partially in the code block below, or by typing 'convmlv -h' once it's installed.

Human Help: I tried to cover everything with the above, but if you're still having trouble I'm happy to help. Just respond to the thread or send me a PM somewhere.


Workflow

A typical task is to convert an MLV file to a high quality MOV file for editing. Doing so with convmlv is quite simple:

convmlv -m cool.MLV

Simple as that! My personal preference is to edit using losslessly compressed EXR sequences, like so:

convmlv -i cool.MLV

Of course, you have a very wide range of features available to aid you in this process. Here's a fuller example:

convmlv -i -m -p 3 -C config.conf -s 25% -b -k -d 3 -g 3 -G 2 -o ./output test.MLV test2.MLV cool.RAW

I'll break this specific command apart:

  • -i: We will output a full-quality EXR sequence.
  • -m: We will also output a high-quality Prores 444 file.
  • -p 3: This proxy mode will generate JPG sequence and MP4 (H.264) proxies.
  • -C: We'll use a local config file, which lets us automatically specify options without having to type them out each time in the command.
  • -s 25%: Our proxies will be 25% the size of the original video.
  • -b: We'll remove any egregious focus pixel issues.
  • -k: We will also output undeveloped DNG files.
  • -d 3: Our demosaic mode (a big part of why we shoot RAW) will be the high quality, but slower, AHD mode.
  • -g 3: Our output will have a Standard Gamma (close to 2.2; gamut dependent - in this case, a Rec.709 gamma).
  • -G 2: Our output will have a Rec.709 gamut.
  • -o ./output: We'll place an additional folder per converted shot/file inside of the folder ./output.
  • The Rest: These are all the shots/files (.MLVs, .RAWs, or a folder containing .DNGs) to develop. This works with bash wildcards; for example, to convert all MLV files in a directory, simply use *.MLV.


You can of course go crazy. This is a valid command:

convmlv -i -t 2 -p 2 -s 75% -r s-e -d 3 -f -H 2 -c 2 -n 50 -g 3 -w 0 -S 9000 --white-speed 5 -a ../../7D_badpixels.txt --threads 36 --uncompress test.MLV

Avoiding commands like this is the point of Config Files. Configs can specify options globally, but can even do different options depending on the file name!


Bugs
Known Bugs: See Issues in the Repository.

If possible, please report bugs as Issues in the main Repository (GitLab) instead of GitHub. I don't mind if you use GitHub; the other way just makes things easier for me.

Future Ideas
Please see (or make) an Issue in the Repository!

Feedback
I'd appreciate any feedback (especially bugs)!
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

DeafEyeJedi

Impressive. PR4444XQ is a must. Is this for Mac or Windows?
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

reddeercity

Nice, looks interesting.
QuoteBasically, it takes .MLV files as input and spits out any combination of TIFF or Prores 4444XQ files
Really?
Not with FFmpeg - the best quality is 10-bit RGB (444), not 12-bit RGB 4444XQ; it can't be done.
Why not use MLP?

DeafEyeJedi

Ha, that's what I thought. Anyway, I just asked @Danne if he could try to implement this so-called script that can somehow export PR4444XQ into MLP, if possible?
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

so-rose

Ah :) somehow I was under the impression that ffmpeg would put out 4444XQ... I've fixed the description. Is there then any way (as in scriptable), on Linux, to create XQ files from an image sequence?

QuoteWhy not use MLP  ?

Well, mainly because (from what I can see) MLP is tethered to Mac OS. A penguin-based workflow is something I need/want!
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

Danne

Nice script.
Since -a auto is applied to every file, it's not suitable for movie sequences, as the result will be non-linear. I solved it by using an averaged white balance from one file. Or actually, MLP uses 8 files ranging from 1 to 450 and calculates the median value from these files.
MlRawviewer outputs the highest FFmpeg ProRes quality possible. If you find a better output, please compare your files with MLP and/or MlRawviewer.
You can also pipe dcraw through standard output with -c, which makes creating intermediate TIFF files unnecessary if the main goal is to create a ProRes file.

dfort

@so-rose

So nice seeing someone making a contribution to ML on a first post. I had to have hundreds of questions answered before I gave anything back.

Of course you know that what's good for the penguin is good for other Unix-like environments like OS X and Cygwin.

I would suggest putting up your script on bitbucket like I did with my script to deal with focus pixels. Check it out, you might find something there that you can use in yours.

https://bitbucket.org/daniel_fort/ml-focus-pixels

Also check out Danne's MLP. Although it is running in a Macintosh Automator environment it is made up of a bunch of bash shell scripts passing arguments to programs like mlv_dump, raw2dng, dcraw, ffmpeg and others.

so-rose

@dfort

Thanks for the advice! The repo is live at https://bitbucket.org/so-rose/convmlv .

With regards to the focus pixel issue, I've never experienced it on the 7D - would it perhaps be worth exposing the dead pixels option (-P) available in DCraw, so that a person could use the referenced script? Example: dcraw <options> -P deadpixels.txt <more options>


With regards to MLP, I think I'll go study that for a bit! In the meantime my script can now use LUTs (a bit slowly, but it works :) )!
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

Danne

The dead pixel option is really working well for cameras with focus pixel problems, but it works just as well for minor hot/dead pixels on other sensors.
http://www.magiclantern.fm/forum/index.php?topic=16054.msg163713;topicseen#msg163713

dfort has a high-end script which can be used to automate the pixel issue completely for the cameras with the focus pixel issue. It works best with MLV files, but with some minor manual labour RAW works too. He even got it grabbing metadata info through hex numbers.
https://bitbucket.org/daniel_fort/ml-focus-pixels/src

By the way: white balance is hard coded into the DNG coming out of mlv_dump, so that is what dcraw will grab with -w. I use a WB template ranging from 1500-15000K (I don't get how to calculate AsShotNeutral values from MLV kelvin metadata; check the MLVFS sources for the ufraw calculations) and some easy calculation math for custom white balance. Auto white balance doesn't apply to MLV metadata.

dfort

Quote from: so-rose on March 05, 2016, 09:41:05 PM
With regards to the focus pixel issue, I've never experienced it on the 7D - would it perhaps be worth exposing the dead pixels option (-P) available in DCraw, so that a person could use the referenced script?

You don't need to deal with focus pixels on the 7D, it only affects the EOSM/650D/700D/100D cameras. Making a "deadpixels" list for dcraw is pretty much all that my script does. Of course your script can easily invoke my script so if you get users that have one of these cameras your script will be able to remove the focus pixels.

I noticed that you made a comment about argument parsing and looking at your code it looks like you're working a little too hard on it. I used getopts which was super easy. Also note that your help function can be greatly simplified. I'm not an experienced programmer by any stretch of the imagination, just sharing a few tricks that I learned along the way.
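
For reference, a bare-bones getopts loop looks something like this (a sketch only - the option letters, file names, and variable names are made up):

while getopts "hm:o:" opt; do
    case "$opt" in
        h) cat help.txt; exit 0 ;;                   # print the help text
        m) mode="$OPTARG" ;;
        o) outdir="$OPTARG" ;;
        *) echo "Unknown option." >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))    # whatever is left in "$@" is the list of input files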

so-rose

So I've been playing around with auto white balance algorithms over the past few days (even examining the MLRawViewer source code... In the meantime I've made the 'do nothing' option the default in my script.). Finally, I got a simple one working in Python (with averaging over a sequence) using PIL and numpy, with one caveat: It's limited to 8 bits!

So, does anyone have any ideas about how to read/write 16-bit pixels from a TIFF in either Python or C++ (given a pixel array, I can start to have some fun  :) )?

QuoteOf course your script can easily invoke my script so if you get users that have one of these cameras your script will be able to remove the focus pixels.

I'm pretty confident I can do just that (which seems like it'd be a big help to those experiencing the issue), but I lack any footage to test it on (any links on the forums are long dead) - @dfort is there any chance you have some lying around?

QuoteJust sharing a few tricks that I learned along the way.

I really appreciate it, thank you :)! I noticed that you use a 'cat' to print your help - I might just follow suit! In regard to getopts, I'll have to play around with that; it looks much better than spamming 'cuts'!
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

dfort

@so-rose

Here's a short MLV shot on a 100D. Plenty of focus pixels on that camera!

https://www.dropbox.com/sh/iz9qsxfpypfghbs/AABPS8ipdz6gdj3-LD0F_GSRa?dl=0

Let me know if you want to play with some more MLV's with focus pixels. I still have some on my hard drive when I was working on the focus pixel maps and my script.

so-rose

@dfort

QuoteLet me know if you want to play with some more MLV's with focus pixels.

Well, thanks to your script, my script now eats through your sample! If you have any others, I'd love to try those out :) .

Otherwise I fixed some bugs, which were actually quite serious...
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

dfort

Sure--here's some more files with focus pixels. The ones of the test chart are with the 700D (mv1080) at various image sizes and I added one from the EOSM in crop mode (mv1080crop) racking the iris so you can see how the focus pixels appear and disappear depending on the color and brightness.

https://www.dropbox.com/sh/kkrx3k2a2hz8nl2/AADAQS5GoTBNPbfP9bFR4OAZa?dl=0

By the way--I noticed in your description that you're using Blender. Could you elaborate a bit more how you're using that program? I heard it can work with exr files--maybe there's a way to tap into that for Dual ISO and HDR video?

Blender is quite an amazing piece of software. I met with the developer when I visited Amsterdam and we talked about some of the possible uses for Blender in post production.

so-rose

@dfort

Yup, no focus pixels anywhere! It works! Thank you for the script, and the assistance! The fine print, of course, is that the better the demosaicing, the fewer pink dots appear in specular highlights.

QuoteI noticed in your description that you're using Blender. Could you elaborate a bit more how you're using that program?

Absolutely! To be honest, I use Blender for everything :). Besides the obvious 3D aspects, I make frequent use of its compositor (for which I've made node groups for many tasks) and its video editor (which is surprisingly good) for creative filmmaking. The workflow takes some fiddling, but the power is all there (the node-based compositor is extremely capable, to the point where you can implement many algorithms mathematically in it); for example, it's perfect for when I want to take a simple RGB difference of two images and examine the result.

The reason I need this script to give me TIFFs/JPGs is that image sequences (as opposed to video files of any kind, even ProRes) are what Blender handles most efficiently. What I do is import both the TIFF and JPG sequences into a "metaclip" (essentially, it locks their timing relative to each other), then disable the slow TIFFs and use the proxy JPGs for fast editing. For color work/the render, I can re-enable the TIFF sequence seamlessly. I have screenshots of a current project, if you wish.

QuoteI heard it can work with exr files--maybe there's a way to tap into that for Dual ISO and HDR video?

The short answer is yes, Blender lives in EXRs (the standard in the 3D world) - Multilayered, compressed; all can be read and written. In terms of Dual ISO/HDR, I'm not quite sure what you refer to - if you're talking about HDR editing, then I'm 99.99% certain that Blender functions in 32 bit float space (seeing as it's able to write to 32 bit floats at full precision), meaning yes, it's a perfect tool for HDR editing :) .

Long story short, in my little suite of open source filmmaking tools (including Blender, Audacity, djv_view, Krita, and DispCal for monitor calibration), Blender is the all-powerful workhorse!



Otherwise, I finally got auto white balance (averaged over the sequence) working via my numpy-based Python implementation of Grey World (the simplest AWB algorithm I could find). It's a little slow (it actually runs dcraw once, with fast settings, to generate the TIFF sequence needed to run dcraw again with proper RGB scaling), but not unusably so!
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

DeafEyeJedi

5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

Danne

Is all processing going through TIFF files coming from dcraw? Check these piping lines in that case. They come from scrax's good old raw2dng, modified by me and put into MLP. You can even pipe further, going straight to FFplay for previewing.

In short (pipe description)
dcraw *.dng | ffmpeg output

find . -maxdepth 1 -iname '*.dng' -print0 | xargs -0 ~/Library/Services/MLP.workflow/Contents/dcraw $icc +M $br $dpf_a $any $ga -c -6 -W -q 3 $r $wb | ffmpeg $wav -f image2pipe -vcodec ppm -r "$fps" -i pipe:0 $sd $cod -n -r "$fps" -vf $sc"lut3d=$lut3d_1_MLV_e","lut3d=$lut3d_2_MLV_e","lut3d=$lut3d_3_MLV_e","lut3d=$lut3d_4_MLV_e","lut3d=$lut3d_5_MLV_e"$tbl -strict -2 "$output""$outputb/"$file.mov

Regarding white balance. Not sure if you actually have to develop the actual file to get to the multiplier numbers. Check this out.

vit_01e=$(~/Library/Services/MLP.workflow/Contents/dcraw -T -a -v -c ${BASE}_1_"$date_01"_000250.dng 2>&1 | awk '/multipliers/ { print $2 }')


dfort

@so-rose

You got some great stuff going on. It would be fantastic if you could start a new topic on how you use Blender. I think that several ML users would like to have an all open source pipeline but they can't find a suitable editing platform.

Good to hear that my script is penguin friendly. So there was no problem with dependencies? (xxd, exiftool, ???) I should learn how to list dependencies by looking over your script.

so-rose

QuoteRegarding white balance. Not sure if you actually have to develop the actual file to get to the multiplier numbers.

@Danne Doesn't the little snippet of code you provided (which I think I'll spin off to get the Camera WB multipliers) still develop the image, dumping it to stdout and reading the verbose output? If it does, then there's no performance advantage: I still want to go through the sequence and average all the multipliers together to get a temporally coherent white balance. If it doesn't, well, then it sounds like a great change to make!

Though, I did figure out an optimization for the AWB: If I want to get the average WB of all the frames, there's no need to develop them all - just, say, every 20th frame (you can specify it with an option). The speedup is significant!
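
The frame-skipping itself is nothing fancy; in shell terms it's roughly this (a sketch, not the exact convmlv code):

# Develop only every 20th DNG for the white balance estimate.
find . -maxdepth 1 -iname '*.dng' | sort | awk 'NR % 20 == 1'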

QuoteAre all processing going through tiff files coming from dcraw? Check this pipinng lines in that case.

Well, I do actually want the .TIFFs, as they're nicer to Blender :) . However I agree that if someone doesn't want the TIFFs, they shouldn't have to go through them to get to the good stuff. I'm having a bit of trouble writing it though - indeed I don't know what I'm doing! Currently my test snippet looks like this:

VID="vid" #hardcoded for now
SCALE=`echo "($(echo "${PROXY_SCALE}" | sed 's/%//') / 100) * 2" | bc -l` #Get scale as factor, *2 for 50%
FRAMES=302 #hardcoded for now

#Pipe dcraw to stdout like usual. Tee it into the high quality/proxy encoders. Proxy scales correctly based on $PROXY_SCALE.
i=0 #hardcoded dcraw for now
for file in *.dng; do
xargs -0 dcraw -c -q 0 -r 1 1 1 1 -o 0 -4 "${file}" | \
tee >( \
ffmpeg -f image2pipe -vcodec ppm -r 24 -i pipe:0 -vcodec prores_ks -n -alpha_bits 0 -vendor ap4h -c:a copy -strict -2 "${VID}_hq.mov" \
) >( \
ffmpeg -f image2pipe -vcodec ppm -r 24 -i pipe:0 -c:v libx264 -preset fast -vf "scale=trunc(iw/2)*${SCALE}:trunc(ih/2)*${SCALE}" -crf 23 -c:a mp3 "${VID}_lq.mp4" \
) | echo -e "\c"
echo -e "\e[2K\rDNG to ProRes/Proxy: Frame ${i}/${FRAMES}.\c"
let i++
done



Any tips?  ???

QuoteSo there was no problem with dependencies? (xxd, exiftool)

@dfort Well, there might be... I (luckily?) had both of those tools installed (and just added them to the dependency list of the script).  What dependencies, exactly, does your script use, and is there a way to list them?

QuoteIt would be fantastic if you could start a new topic on how you use Blender.
I'd love to! Which forum would I do so in?

QuoteI should learn how to list dependencies by looking over your script.
Not much to say - I put 'em in a string by hand :) !



Thanks everyone for your comments! :D
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

Danne

QuoteThough, I did figure out an optimization for the AWB: If I want to get the average WB of all the frames, there's no need to develop them all - just, say, every 20th frame (you can specify it with an option). The speedup is significant!

That's exactly what MLP does :)


Linux and Mac should be close:
find . -maxdepth 1 -iname '*.dng' -print0 | xargs -0 dcraw $icc +M $br $dpf_a $any $ga -c -6 -W -q 3 $r $wb | ffmpeg $wav -f image2pipe -vcodec ppm -r "$fps" -i pipe:0 $sd $cod -n -r "$fps"  "$output""$outputb/"$file.mov


find . -maxdepth 1 -iname '*.dng' -print0 | xargs -0 dcraw
The find wildcard lets larger numbers of DNG files through, unlike ls, which breaks.

dcraw $icc +M $br $dpf_a $any $ga -c -6 -W -q 3 $r $wb
Settings to dcraw

| ffmpeg $wav -f image2pipe -vcodec ppm -r "$fps" -i pipe:0 $sd $cod -n -r "$fps"  "$output""$outputb/"$file.mov
pipe command to FFmpeg.
$wav=wave file
-r "$fps"=add the frames per second coming from dng metadata
$cod=codec(Prores4444)
"$output""$outputb/"$file.mov=output path



so-rose

QuoteThat,s exactly what MLP does :)

That means I'm doing it right! :)



@Danne But yes, I finally got the piping to work! Thanks for the assistance! What kept irking me was that I ended up having to pipe the -print0'ed find through a sort -z; the frames were essentially chosen at random before that. It's a little magical that this works!!

I also managed to run both the main and proxy encoder simultaneously when called for (couldn't figure out how to split the pipe), where vidHQ/LQ are functions:
cat $PIPE | vidLQ & echo "text" | tee $PIPE | vidHQ
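
Spelled out a little (a rough sketch of the same trick; developFrames, vidHQ and vidLQ are stand-ins for the real dcraw pipeline and encoder functions):

PIPE=$(mktemp -u)    # pick an unused path for the named pipe
mkfifo "$PIPE"
cat "$PIPE" | vidLQ &                  # proxy encoder reads from the named pipe in the background
developFrames | tee "$PIPE" | vidHQ    # tee duplicates the stream: one copy into the pipe, one to the HQ encoder
wait                                   # let the background proxy encoder finish
rm -f "$PIPE"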

I currently, however, have no idea how to read the FPS from the MLV or RAW file... For now, it's specified.


Implementing this, I redid the control flow a bit - you now don't have to generate TIFFs first; -i and -m specify image sequence/movie output, which fixes what I think was the most major quirk. So, the question is: what next?  :-\
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0

Danne

mlv_dump can provide you with fps information. Something like this should work.
fps=$(mlv_dump -v -m "$FILE" | grep 'FPS' | awk 'FNR == 1 {print $3}')

For RAW it's a little different, since it's not in the DNG metadata but is shown while processing with raw2dng. While extracting, do this:
fps=$(raw2dng *.RAW ${BASE}"$date_08"_ | awk '/FPS/ { print $3; }')

You can get metadata information with exiftool
fps=$(find . -maxdepth 1 -iname '*.dng' -print0 | xargs -0 ~/Library/Services/MLP.workflow/Contents/exiftool | awk '/Frame Rate/ {print $4; exit}')

You can even grab the fps information through hex the way dfort is doing it with his pixel script.




dfort

Quote from: dfort on March 13, 2016, 04:30:34 PM
...It would be fantastic if you could start a new topic on how you use Blender. I think that several ML users would like to have an all open source pipeline but they can't find a suitable editing platform.

Quote from: so-rose on March 13, 2016, 10:33:57 PM
I'd love to! Which forum would I do so in?

Well, there's the Post-processing Workflow area, which has a couple of child boards that might also apply: Raw Video Postprocessing, and HDR and Dual ISO Postprocessing. I'd say stick with the Post-processing Workflow area. My suggestion would be to put up some screenshots illustrating how you're using open source tools, along with some step-by-step instructions, sort of like my compiling tutorials.

BTW--a while back someone did a video tutorial demonstrating an HDR workflow with blender: http://magiclantern.fm/forum/index.php?topic=1034

dfort

Quote from: Danne on March 14, 2016, 07:01:48 AM
You can even grab the fps information through hex the way dfort is doing it with his pixel script.

It was a little experiment I was doing but here it is. Simply feed a RAW (not MLV) file into this script:

# 'magic' (192,4) 52 41 57 4d | width (188,2) 00 05 | height (186,2) d0 02 | frameSize (184,4) 00 a0 18 00 |
# Frame count (180,2) 28 00 | Frame skip (178,6) 00 00 01 00 00 00 | FPS (172,2) ac 5d | etc.

fps=`xxd -s -0xac -l 2 -p "$1"`
echo $fps
fps="${fps:2:2}${fps:0:2}"
echo $fps
fps=$((16#$fps))
echo $fps
fps=$(awk "BEGIN {printf \"%.3f\",${fps}/1000}")
echo $fps


Here's the message I sent to Danne about this:

Quoteit starts with a comment on where things are located in the footer. If you start at the end of the file and move back 192 bytes, those next 4 bytes (going forward now) make up the 'magic' number "RAWM"--188 bytes from the end and 2 bytes forward is the width and so on until you get to what you're interested in, the FPS which are the two bytes located 172 bytes from the end of the file. It got a little tricky because you need floating point math for the FPS and bash only does integer. I could have gone with bc but you're more comfortable with awk so I found a way to do it with awk. The echo statements just show the progression of the variable to get it into the format that you need. There's probably a more elegant solution to this but it works and figured that's good enough for now.

so-rose

@dfort That's some real neat code!

At the moment, I'm using Danne's suggestion (the fps=$(raw2dng *.RAW ${BASE}"$date_08"_ | awk '/FPS/ { print $3; }') statement) for getting the FPS from the RAW file. Though, I'm not a huge fan of intercepting raw2dng's output as it writes the dngs...

Quote...how you're using Open Source tools along with some step-by-step instructions sort of like my compiling tutorials.

Thanks! Now I gotta think about what to say/how to do this properly... Stay tuned :)



An open question: How does Dual ISO video processing work? And does anyone have any samples...? (My 7D refuses to cooperate; I think I need to put a new ML build in there)
convmlv - feed it your footage, it's safe I swear  -   http://www.magiclantern.fm/forum/index.php?topic=16799.0

openlut - recoloring your day, lut by lut  -  http://www.magiclantern.fm/forum/index.php?topic=17820.0