Messages - so-rose

#26
Share Your Videos / Re: 50D RAW Dance Montage
March 21, 2016, 10:39:07 PM
That's some great work! And yeah, I get the frame skip issue on my 7D too when my cards are about full. If I may, how do you manage that (or do you just not use those clips)?
#27
One more question: How does footage archival work with mlv_dump? As in, compressing the MLV file (and being able to decompress it for use)?
#28
All right, so I got Dual ISO video to work. However, I'm getting what seems like blinking focus pixels. Here's the proxy video output from the 100D sample:

https://drive.google.com/open?id=0Bx0YaKfeuWSrLU9iY2tUZjJqZFE

I tried running mlv2badpixels.sh on the original MLV file, as well as on a developed (with cr2hdr) DNG, using $raw_buffer --> $video_mode (from @dfort's code) from the MLV to specify the -m option, but they both have the exact same artifacts as above. What the heck is going on?

Besides the blinking pixels  :-\ , the output looks good.

Thanks @Danne for exposing me to the -P option of xargs - I managed to parallelize image sequence and Dual ISO development :) . Things are much faster now.
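For the curious, the pattern is simple - xargs fans the file list out over several workers:

```shell
# Feed a NUL-separated file list to xargs, which runs up to 4 copies of
# the per-frame command at once (-P 4). Here the "command" is just echo;
# in the script it's the dcraw/cr2hdr development step.
printf 'frame_%04d.dng\0' 1 2 3 4 5 6 7 8 | \
    xargs -0 -n 1 -P 4 echo developing
```

The -print0/-0 pairing keeps filenames with spaces intact; output order is whatever order the workers finish in.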



Finally, the script can now output to TIFF, EXR, DPX, and PNG sequences, all automatically compressed if -c is specified. It's trivial to add more; any requests?  :D
#29
@dfort That's some real neat code!

At the moment, I'm using Danne's suggestion (the fps=$(raw2dng *.RAW ${BASE}"$date_08"_ | awk '/FPS/ { print $3; }') statement) for getting the FPS from the RAW file. Though, I'm not a huge fan of intercepting raw2dng's output as it writes the dngs...
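(For reference, the awk part of that statement is just field extraction on a matching line - shown here with made-up log lines, since I don't have raw2dng's exact output in front of me:)

```shell
# awk prints the 3rd whitespace-separated field of any line matching /FPS/.
# The log lines below are hypothetical stand-ins for raw2dng's output.
printf 'Resolution : 1728x972\nFPS : 23.976\n' | awk '/FPS/ { print $3; }'
# -> 23.976
```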

Quote...how you're using Open Source tools along with some step-by-step instructions sort of like my compiling tutorials.

Thanks! Now I gotta think about what to say/how to do this properly... Stay tuned :)



An open question: How does Dual ISO video processing work? And does anyone have any samples...? (My 7D refuses to cooperate; I think I need to put a new ML build in there)
#30
QuoteThat's exactly what MLP does :)

That means I'm doing it right! :)



@Danne But yes, I finally got the piping to work! Thanks for the assistance! What kept irking me was that I ended up having to pipe the -print0'ed find through sort -z; before that, the frames were essentially chosen in random order. It's a little magical that this works!!

I also managed to run both the main and proxy encoders simultaneously when called for (I couldn't figure out how to split the pipe), where vidHQ/LQ are functions:
cat $PIPE | vidLQ & echo "text" | tee $PIPE | vidHQ
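For anyone wanting to try it, here's a minimal self-contained version of that trick, with trivial stand-ins for the encoder functions:

```shell
#!/bin/sh
# One stream, two consumers, via a named pipe (FIFO).
# vidHQ/vidLQ are stand-ins for the real encoder functions.
vidHQ() { tr 'a-z' 'A-Z'; }    # stand-in for the main encoder
vidLQ() { wc -c; }             # stand-in for the proxy encoder

PIPE=$(mktemp -u)              # unique path for the FIFO
mkfifo "$PIPE"

cat "$PIPE" | vidLQ &          # start the FIFO reader first, or tee blocks
echo "frame data" | tee "$PIPE" | vidHQ
wait                           # let the background reader finish
rm -f "$PIPE"
```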

I currently, however, have no idea how to read the FPS from the MLV or RAW file... For now, it's specified.


Implementing this, I redid the control flow a bit - you no longer have to generate TIFFs first; -i and -m specify image sequence/movie output, which fixes what I think was the biggest quirk. So, the question is, what next   :-\?
#31
QuoteRegarding white balance. Not sure if you actually have to develop the actual file to get to the multiplier numbers.

@Danne Doesn't the little snippet of code you provided (which I think I'll spin off to get the Camera WB multiplier) still develop the image, dumping it to stdout and reading the verbose output? If it does, then there's no performance advantage: I still want to go through the sequence and average all the multipliers together to get a temporally coherent white balance. If it doesn't, well, then it sounds like a great change to make!

Though, I did figure out an optimization for the AWB: If I want to get the average WB of all the frames, there's no need to develop them all - just, say, every 20th frame (you can specify it with an option). The speedup is significant!
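(The sampling itself is just a modulus over the sorted frame list - the filenames below are hypothetical:)

```shell
# Keep every 20th name from a sorted frame list for the WB average.
printf 'frame_%05d.dng\n' $(seq 1 60) | sort | awk 'NR % 20 == 1'
# -> frame_00001.dng, frame_00021.dng, frame_00041.dng (one per line)
```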

QuoteIs all processing going through tiff files coming from dcraw? Check these piping lines in that case.

Well, I do actually want the .TIFFs, as they're nicer to Blender :) . However I agree that if someone doesn't want the TIFFs, they shouldn't have to go through them to get to the good stuff. I'm having a bit of trouble writing it though - indeed I don't know what I'm doing! Currently my test snippet looks like this:

VID="vid" #hardcoded for now
SCALE=`echo "($(echo "${PROXY_SCALE}" | sed 's/%//') / 100) * 2" | bc -l` #Get scale as factor, *2 for 50%
FRAMES=302 #hardcoded for now

#Pipe dcraw to stdout like usual. Tee it into the high quality/proxy encoders. Proxy scales correctly based on $PROXY_SCALE.
i=0 #hardcoded dcraw for now
for file in *.dng; do
xargs -0 dcraw -c -q 0 -r 1 1 1 1 -o 0 -4 "${file}" | \
tee >( \
ffmpeg -f image2pipe -vcodec ppm -r 24 -i pipe:0 -vcodec prores_ks -n -alpha_bits 0 -vendor ap4h -c:a copy -strict -2 "${VID}_hq.mov" \
) >( \
ffmpeg -f image2pipe -vcodec ppm -r 24 -i pipe:0 -c:v libx264 -preset fast -vf "scale=trunc(iw/2)*${SCALE}:trunc(ih/2)*${SCALE}" -crf 23 -c:a mp3 "${VID}_lq.mp4" \
) | echo -e "\c"
echo -e "\e[2K\rDNG to ProRes/Proxy: Frame ${i}/${FRAMES}.\c"
let i++
done



Any tips?  ???
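(For clarity, the structure I'm aiming for - assuming bash, with stand-in functions in place of the real dcraw/ffmpeg calls above - is to decode the whole sequence once and tee the stream into both encoders, instead of re-invoking ffmpeg per frame:)

```shell
#!/bin/bash
# Decode ONCE over the whole sequence; tee into the HQ encoder (via
# process substitution) while the proxy encoder reads the main pipe.
# decode/encodeHQ/encodeLQ are stand-ins for the real dcraw/ffmpeg calls.
decode()   { cat "$@"; }                  # stand-in: dcraw -c ... *.dng
encodeHQ() { tr 'a-z' 'A-Z' > hq.out; }   # stand-in: ffmpeg ... prores_ks
encodeLQ() { cat > lq.out; }              # stand-in: ffmpeg ... libx264

printf 'frame data\n' > frames.txt        # stand-in input sequence
decode frames.txt | tee >(encodeHQ) | encodeLQ
sleep 0.2    # give the process-substitution subshell time to flush
rm -f frames.txt
```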

QuoteSo there was no problem with dependencies? (xxd, exiftool)

@dfort Well, there might be... I (luckily?) had both of those tools installed (and just added them to the dependency list of the script).  What dependencies, exactly, does your script use, and is there a way to list them?

QuoteIt would be fantastic if you could start a new topic on how you use Blender.
I'd love to! Which forum would I do so in?

QuoteI should learn how to list dependencies by looking over your script.
Not much to say - I put 'em in a string by hand :) !



Thanks everyone for your comments! :D
#32
@dfort

Yup, no focus pixels anywhere! It works! Thank you for the script, and the assistance! The fine print, of course, is that the better the demosaicing, the fewer pink dots will appear in specular highlights.

QuoteI noticed in your description that you're using Blender. Could you elaborate a bit more how you're using that program?

Absolutely! To be honest, I use Blender for everything :) . Besides the obvious 3D aspects, I make frequent use of its compositor (for which I've made node groups for many tasks) and video editor (which is surprisingly good) for creative filmmaking. The workflow takes some fiddling, but the power is all there - the node-based compositor is capable enough that you can implement many algorithms mathematically in it. For example, it's perfect when I want to take a simple RGB difference of two images and examine the result.

The reason I need this script to give me TIFFs/JPGs is that image sequences (as opposed to videos of any kind, even ProRes) are heavily optimized in Blender. What I do is import both the tiff and the jpg into a "metaclip" (which essentially locks their timing relative to each other), then disable the slow tiff sequence and use the proxy jpgs for fast editing. For color work and the final render, I can seamlessly re-enable the tiff sequence. I have screenshots of a current project, if you wish.

QuoteI heard it can work with exr files--maybe there's a way to tap into that for Dual ISO and HDR video?

The short answer is yes - Blender lives in EXRs (the standard in the 3D world): multilayered, compressed, all readable and writable. In terms of Dual ISO/HDR, I'm not quite sure what you're referring to - if you mean HDR editing, then I'm 99.99% certain that Blender works in 32-bit float space (seeing as it can write 32-bit floats at full precision), meaning yes, it's a perfect tool for HDR editing :) .

Long story short, in my little suite of open source filmmaking tools (including Blender, Audacity, djv_view, Krita, and DispCal for monitor calibration), Blender is the all-powerful workhorse!



Otherwise, I finally got auto white balance (averaged over the sequence) working via my numpy-based Python implementation of Grey World (the simplest AWB algorithm I could find). It's a little slow (it runs dcraw once, with fast settings, to generate the tiff sequence needed to run dcraw again with proper RGB scaling), but not unusably so!
#33
@dfort

QuoteLet me know if you want to play with some more MLV's with focus pixels.

Well, thanks to your script, my script now eats through your sample! If you have any others, I'd love to try those out :) .

Otherwise I fixed some bugs, which were actually quite serious...
#34
So I've been playing around with auto white balance algorithms over the past few days (even examining the MLRawViewer source code); in the meantime, I've made the 'do nothing' option the default in my script. Finally, I got a simple one working in Python (with averaging over a sequence) using PIL and numpy, with one caveat: it's limited to 8 bits!

So, does anyone have any ideas about how to read/write 16-bit pixels from a TIFF in either Python or C++ (given a pixel array, I can start to have some fun  :) )?
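In the meantime, the actual math is easy once the pixels are in a numpy array (16-bit-aware readers exist - e.g. tifffile's imread keeps TIFF data as uint16). A minimal Grey World sketch on a synthetic 16-bit frame:

```python
import numpy as np

def grey_world_gains(img):
    """Per-channel multipliers that map each channel's mean onto the
    overall grey mean (the Grey World assumption)."""
    means = img.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    return means.mean() / means

def apply_gains(img, gains):
    """Scale in float, then clip back into the 16-bit range."""
    out = img.astype(np.float64) * gains
    return np.clip(out, 0, 65535).astype(np.uint16)

# Synthetic 16-bit frame with a blue cast; real pixels would come from a
# 16-bit-aware TIFF reader instead.
frame = np.full((4, 4, 3), (20000, 30000, 40000), dtype=np.uint16)
balanced = apply_gains(frame, grey_world_gains(frame))  # every pixel -> grey
```

Averaging over a sequence then just means accumulating the per-frame gains and applying their mean.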

QuoteOf course your script can easily invoke my script so if you get users that have one of these cameras your script will be able to remove the focus pixels.

I'm pretty confident I can do just that (which seems like it'd be a big help to those experiencing the issue), but I lack any footage to test it on (any links on the forums are long dead) - @dfort is there any chance you have some lying around?

QuoteJust sharing a few tricks that I learned along the way.

I really appreciate it, thank you :) ! I noticed that you use a 'cat' to print your help - I might just follow suit! As for getopts, I'll have to play around with that; it looks much better than spamming 'cuts'!
#35
@dfort

Thanks for the advice! The repo is live at https://bitbucket.org/so-rose/convmlv .

With regards to the focus pixel issue, I've never experienced it on the 7D - would it perhaps be worth exposing the dead pixels option (-P) available in DCraw, so that a person could use the referenced script? Example: dcraw <options> -P deadpixels.txt <more options>


With regards to MLP, I think I'll go study that for a bit! In the meantime my script can now use LUTs (a bit slowly, but it works :) )!
#36
Ah :) somehow I was under the impression that ffmpeg would put out 4444XQ... I've fixed the description. Is there then any way (as in scriptable), on Linux, to create XQ files from an image sequence?

QuoteWhy not use MLP  ?

Well, mainly because (from what I can see) MLP is tethered to Mac OS. A penguin-based workflow is something I need/want!
#37
Hello,

This is, simply enough, an offline image processing tool that converts ML formats (RAW, MLV, DNG sequences) into many usable formats (EXR, DPX, and ProRes). It runs on Linux and Mac, and focuses on features and output quality (more in the Features section).

It's essentially a huge swath of glue between a bunch of great tools.

I use it myself - these two short films were developed, graded, and edited in a single weekend, made possible by convmlv (v1.7)!

http://youtu.be/yi-G7sXHB1M
http://youtu.be/yi-G7sXHB1M

Huge thanks to @Danne and @bouncyball for figuring out the color management. Thanks to @dfort for his badpixels script and to @dmilligan for MLVFS! Another huge thanks to the whole Magic Lantern community for the tools without which none of this would be possible!

Download/Install

Getting convmlv on your machine goes like this:

  • Download the release tarball for your platform and extract it.
  • Optional: Download the .sig file and verify the tarball's integrity (so you know that nobody has modified the software between me and you).
  • Install the dependencies - see the bundled docs.pdf.

Latest Release: https://github.com/so-rose/convmlv/releases/tag/v2.0.3 <-- Download here!
Repository: https://git.sofusrose.com/so-rose/convmlv <-- The cutting edge!
GitHub Mirror: https://github.com/so-rose/convmlv <-- A simple mirror to GitHub.

Dependencies on other ML tools are bundled in the release. Refer to the help page (under MANPAGE, or run './convmlv.sh -h') for links to these.

Make sure the script and all binaries have execution permissions (run 'chmod +x file'), otherwise convmlv will fail! If anything is missing, it will tell you what and where to get it!

General Info

Documentation: You can find an up to date tutorial/documentation PDF I made under 'docs/docs.pdf' in the repository. It's also bundled in the release.

Here's the link to it: https://git.sofusrose.com/so-rose/convmlv/raw/master/docs/docs.pdf?inline=false


v2.0.3: Some more bug fixes, based on a bug report.
*Fixed bug with split MLV files by piping the FRAMES output through "paste -s -d+ - | bc"
*Fixed the color-ext LUTs, which were unfixed due to laziness.
*Fixed mlv2badpixels.sh when mlv_dump wasn't on the PATH. Required modding the script; a patch is in the main repo, and is distributed alongside binaries.
*Added documentation that symbolic linking support is required.

Full changelog can be found in the repository, under CHANGELOG.


Bad Pixels File Example: You can find a sample .badpixels file in the repository (the Download link), which you can adapt to remove bad pixels (which all cameras have) from your footage.

Darkframe Information: Read more about darkframe subtraction here: http://magiclantern.fm/forum/index.php?topic=15801.msg164620#msg164620

Config File Examples: You can find example config files in the download repository, under configs/*.conf.

All command line options must go before the list of MLV/RAW files or DNG folders.

Features
convmlv is essentially a big piece of interface glue between many great image processing tools, many of whose features it inherits directly!

-Easy usage w/good defaults - specify -m, follow with a list of .MLV files to develop. (Type in terminal: convmlv -m *.MLV)
-Create ready-to-edit image sequences/video files in all kinds of losslessly compressed formats.
-Offline image quality with good, highly multithreaded performance.
-Develop a specific frame range easily. MLRawViewer is a great companion to find desired frame ranges.
-Complete control over the RAW process itself: highlight reconstruction, demosaicing, color space, chroma smoothing, white balance, etc.
-Color managed, with a variety of color space options. The philosophy is one of no quality loss from sensor --> output.
-Several noise reduction techniques, from wavelet denoising to high-quality temporal-spatial denoising, to astro-derived darkframe subtraction, to experimental FFMPEG modules.
-Easy HDR (Dual ISO) processing, with the -u option.
-Easy bad pixel removal. The -b option (courtesy of @dfort) removes pink dots, and can be combined with our own .badpixels file mapping out the dead pixels on your camera.
-Since the output can be very heavy to edit with, it's simple to create edit-friendly color managed JPG/MP4 proxies alongside.
-Several FFMPEG filters (multiple 3D LUTs, temporal denoising, hqdn3d, removegrain, unsharp, and deshake currently - request more; they're very easy to implement).
-Reads Camera WB, but can also use a homegrown AWB algorithm or no WB at all.
-Extracts MLV sound to a WAV, and metadata into a settings.txt file.
-Portable Bash script - fully compatible with Linux and Mac (Windows is untested).
-Production-ready config file format lets you specify any option on a global or per-MLV basis in a single config, saving enormous amounts of time on a deadline.


Documentation/How To

Tutorial: You can find an up to date tutorial/documentation PDF under 'docs/docs.pdf' in the repository. It's also bundled in the release.

Here's a link to it: https://git.sofusrose.com/so-rose/convmlv/raw/master/docs/docs.pdf?inline=false

Help Page: The primary, most up-to-date (if most esoteric) documentation is the help page. You can access it in docs/MANPAGE in the repo, partially in the code block below, or by typing 'convmlv -h' once it's installed.

Human Help: I tried to cover everything with the above, but if you're still having trouble I'm happy to help. Just respond to the thread or send me a PM somewhere.


Workflow

A typical task is to convert an MLV file to a high quality MOV file for editing. Doing so with convmlv is quite simple:

convmlv -m cool.MLV

Simple as that! My personal preference is to edit using losslessly compressed EXR sequences, like so:

convmlv -i cool.MLV

Of course, you have a very wide range of features available to aid you in this process. Here are a couple:

convmlv -i -m -p 3 -C config.conf -s 25% -b -k -d 3 -g 3 -G 2 -o ./output test.MLV test2.MLV cool.RAW

I'll break this specific command apart:

  • -i: We will output a full-quality EXR sequence.
  • -m: We will also output a high-quality Prores 444 file.
  • -p 3: This proxy mode will generate a JPG sequence and an MP4 (H.264) proxy.
  • -C: We'll use a local config file, which lets us automatically specify options without having to type them out each time in the command.
  • -s 25%: Our proxies will be 25% the size of the original video.
  • -b: We'll remove any egregious focus pixel issues.
  • -k: We will also output undeveloped DNG files.
  • -d 3: Our demosaic mode (a big part of why we shoot RAW) will be the high quality, but slower, AHD mode.
  • -g 3: The output will have a Standard Gamma (close to 2.2; gamut dependent - in this case, a Rec.709 gamma).
  • -G 2: The output will have a Rec.709 gamut.
  • -o ./output: We'll place an additional folder per converted shot/file inside the folder ./output.
  • The Rest: These are all the shots/files (.MLVs, .RAWs, or a folder containing .DNGs) to develop. This works with bash wildcards; for example, to convert all MLV files in a directory, simply use *.MLV.


You can of course go crazy. This is a valid command:

convmlv -i -t 2 -p 2 -s 75% -r s-e -d 3 -f -H 2 -c 2 -n 50 -g 3 -w 0 -S 9000 --white-speed 5 -a ../../7D_badpixels.txt --threads 36 --uncompress test.MLV

Avoiding commands like this is the point of config files. Configs can specify options globally, and can even apply different options depending on the file name!


Bugs
Known Bugs: See Issues in the Repository.

If possible, please report bugs as Issues in the main repository (GitLab) rather than on GitHub. I don't mind if you use GitHub; the other way just makes things easier for me.

Future Ideas
Please see (or make) an Issue in the Repository!

Feedback
I'd appreciate any feedback (especially bugs)!