Show posts
Messages - Murphy

#1
What about using the Windows exe version under Wine? Not ideal, but it could work.
#2
You might want to check whether it's actually shifting the colour, or whether QuickTime is just using a different ICC colour profile from the software you view your TIFFs in. My guess is that QuickTime uses a 2.2 gamma profile to display movies, compared to the standard 1.8 gamma the system uses, rather than the shift actually being baked into the file on export/conversion.

Try copying one of the mov frames from the QuickTime edit menu, pasting it into Photoshop, and comparing it to the original TIFF frame in Photoshop. The actual RGB information in both files should be the same, and I'd put money on them looking identical when viewed in the same colour space, such as the Photoshop environment.

A lot of programs have different colour profiles, and it sometimes makes it a nightmare to match things up when switching between QuickTime, FCP, AE, etc. The print and web designers here know what I'm talking about :) But I doubt your mov files are actually losing RGB data integrity (lossy compression notwithstanding).

@Hazer would you be able to copy and compare some frames in Photoshop from your Motion > Compressor method? My guess is that it's just adding the system's 1.8 gamma profile to the mov file's metadata header. But if the raw RGB frame you copy from the Motion-exported mov into Photoshop doesn't look the same as the original frame when viewed under the same colour space, then the method you're using could be changing (and destroying) RGB colour information in your file, which is probably not what you want, even though it superficially looks right in QuickTime compared to Photoshop or the OSX system picture viewer.
#3
Thanks for the kind words guys. Also thanks to scheng.pell again for the German translation and working out the bug. It didn't even occur to me that AE scripts would break depending on the language version.

kgv5, there's no reason I can think of why it shouldn't work in Windows. The Adobe scripting API is the same for both systems and should be platform independent (like Python or the JRE). If you want to give it a test, let us know if you have any problems.
#4
Hi guys. I've done some more work on the script, but some telco technicians cut my ADSL line while installing a phone in the apartment next door, so I've been without interwebs for a few days (I'm still using my phone as a 3G modem right now) and haven't been able to upload the changes I've made.

@cineblah Thanks for the step by step. The script is kind of a pain, because it's really just an OS instruction set which tells a bunch of other programs what to do, and those other programs can be a bit of work to install and set up properly.

@Scrax, I'll take a look at MLTools when I have full internet again. I'm a bit pedantic about how my files are organised, and haven't really found many auto-organising apps that I'm comfortable with (one of the reasons I hate AVID is its database system), so I tend to write my own scripts to fit my personal workflow wherever possible. As for integration into MLTools, I'm assuming it's cross-platform? A lot of the things in this script are quite POSIX-specific. I'm a Linux/OSX guy, and it just made sense to write it in bash, given that it's a set of instructions to run other programs and requires a lot of OS-specific tasks like mkdir and ls.

Windows natively lacks a lot of the terminal scripting functionality that POSIX systems like OSX and Linux have. You might be able to achieve the same thing by rewriting the script in something more cross-platform like Python, or maybe getting it working under Cygwin, but I don't think it would be easy to port this script to Windows.

@Oedipax, I'll look at multicore support, but multicore is a fairly new thing program-wise. If it were written from scratch in C++ I could use pthreads or Boost, but the script is in bash (which makes sense given it's a relatively simple set of instructions to run other programs) and is thus limited to what a POSIX terminal, and the programs it launches, are capable of.

The easiest way to multithread would probably be to create a loop that loads each DNG frame as a separate ufraw background process using &, but the problem is that with a sequence of 1000 frames, launching 1000 parallel processes all at once could be horribly unstable. It would need a wait loop to check that it isn't spawning too many processes at once. Maybe piping ps -ef into grep and counting the results would work. It will be a couple of weeks before I have time to try implementing that, though (I'm out of town on a job next week).
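A throttled background-job loop along those lines might look like this minimal sketch. The converter function is a stand-in (the real call would be something like ufraw-batch), and I've used bash's own `jobs -rp` to count running jobs instead of piping ps -ef into grep:

```shell
#!/bin/bash
# Throttle parallel frame conversions to MAXJOBS background processes,
# instead of launching the whole 1000-frame sequence at once.
MAXJOBS=4
OUTDIR=$(mktemp -d)

convert_frame() {
  # stand-in for the real converter, e.g. ufraw-batch --out-type=tiff "$1"
  sleep 0.1
  : > "$OUTDIR/$1.done"   # mark this frame as finished
}

for f in frame_0001 frame_0002 frame_0003 frame_0004 frame_0005 frame_0006; do
  # wait for a free slot; `jobs -rp` prints the PIDs of running background jobs
  while [ "$(jobs -rp | wc -l)" -ge "$MAXJOBS" ]; do
    sleep 0.05
  done
  convert_frame "$f" &
done
wait   # block until every background conversion has finished

count=$(ls "$OUTDIR" | wc -l)
echo "converted $((count)) frames"
```

The `wait` at the end matters: without it the script would move on to the ffmpeg step while frames were still converting.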

For the numbering system, as far as I understand it, Magic Lantern RAW is basically still in the pre-alpha stages of development. With any luck, when the raw module is finally merged into the mainline firmware, it will generate unique file names like regular MOV or CR2 files have, rather than starting at M000000 every time you format the CF card (which currently causes the script to overwrite files put in the same scratch location).

In the meantime, using the creation timestamp is a good idea. In the latest version, DNG timelapses are actually named based on their first and last frame, rather than a generated number.

<---------->

Which brings us to the next part. The latest version of the script:

http://www.basetheory.com.au/wp-content/uploads/2013/06/canonimport-2013-06-02.zip

1) Cleaned up the code a little. Added more options at the beginning of the script for your choice of full-res and proxy codecs. Also significantly cleaned up the terminal output (ffmpeg and ufraw generate a lot of junk).

2) Changed the folder layout a little. Got rid of the MOV and Proxy folders, as the Mac version of ffmpeg can apparently transcode to Apple ProRes 422, which negates the need for lower-res offline proxy files (at least for me; if you think they should still be created, let me know). Now it just transcodes everything once, to ProRes (though you can change the FULLRES variable to your editing codec of choice).

3) Added support for HDR timelapse creation. This is dependent on ML's bracketing, though, as the script searches for the HDR .SH files generated by the ML firmware in camera. If you just use standard Canon bracketing, the script will not be able to tell the difference between a timelapse and an HDR timelapse. It also requires enfuse to be installed.

4) Better algorithm for analysing timelapse and still shots. Also names timelapses based on the first and last frame in the sequence, rather than a generated ID number.

5) Added -h (help) and -v (verbose) options.
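The HDR detection in point 3 boils down to something like this sketch. The card path and filenames here are made up for illustration, and the actual enfuse call is left commented out since it requires enfuse to be installed:

```shell
#!/bin/bash
# Sketch: Magic Lantern writes a .SH helper script alongside each HDR
# bracket, so the presence of .SH files on the card is what separates
# an HDR timelapse from a plain CR2 timelapse.
CARD=$(mktemp -d)                                # stand-in for the mounted CF card
touch "$CARD/HDR_0001.SH" "$CARD/HDR_0002.SH"    # fake ML bracket scripts

hdr_count=$(find "$CARD" -name '*.SH' | wc -l)
hdr_count=$((hdr_count))                         # strip any wc padding

if [ "$hdr_count" -gt 0 ]; then
  echo "HDR brackets found: $hdr_count"
  # fuse each bracket's exposures (requires enfuse), e.g.:
  # enfuse -o fused_0001.tif bracket_low.tif bracket_high.tif
else
  echo "no .SH files found, treating the CR2s as a plain timelapse"
fi
```

This is also why standard Canon bracketing slips past the script: without the ML-generated .SH files there's nothing to detect.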

Also, I've created a tweaked version of the script so that it works with my GoPro Hero3. Hopefully I'll eventually merge the scripts into one master script that imports and organises media from a variety of different cameras.

Thanks again guys. The feedback is greatly appreciated. It's nice to know other people find it useful, and I'm not just talking to myself on here :)
#5
For stills, I've really enjoyed using Darktable (http://www.darktable.org/) for processing raw photos. It's open source, which is a definite win in my book.

For DNG video sequences, I was using AE to process everything, but recently I've started using ufraw and ffmpeg to batch convert things for offline proxies. That's nice because I can write a shell script to automate the process and go drink coffee while the computer thinks, but you're going to need something higher-end like AE or Resolve for the online finishing. Unfortunately there isn't a lot of low-end video editing software out there that will handle raw image sequences. Maybe Lightworks, if EditShare ever gets around to releasing the open source version.

Will defs give RPP a look though.
#6
Thanks Oedipax. I hope you get some good use out of it.

I've made a few minor updates to the script. As well as a bit of code cleanup and organisation, I've slightly changed the folder layout. The script now separates the slow motion shots and standard video shots into separate folders.

I've also added a few lines of code to analyse and sort the CR2 files. In the previous script, all the CR2 files were basically treated as one long timelapse sequence to be stitched together. The script now analyses the unix timestamp of each CR2 file, and if the gap between the current and previous frames is longer than the TLBREAK variable (35 seconds by default), it treats the current frame as the start of a new timelapse and organises the files accordingly.

Also, if there is a long timestamp gap on either side of a CR2 file, it will be treated as a standalone photo rather than part of a timelapse sequence, and will be moved to the Stills folder.
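The gap-detection logic above comes down to something like this sketch. Hard-coded timestamps stand in for the real per-file modification times, and TLBREAK is the 35-second default mentioned above:

```shell
#!/bin/bash
# Split a sorted list of unix timestamps into sequences: any gap longer
# than TLBREAK seconds means the current frame starts a new timelapse.
TLBREAK=35
timestamps="1000 1005 1010 2000 2005 3000"   # stand-ins for real CR2 mtimes

seq_id=0
prev=""
for t in $timestamps; do
  if [ -n "$prev" ] && [ $((t - prev)) -gt "$TLBREAK" ]; then
    seq_id=$((seq_id + 1))   # long gap: this frame starts a new sequence
  fi
  echo "frame at $t -> sequence $seq_id"
  prev=$t
done
```

The same pass can flag a frame with a long gap on both sides as a standalone photo (like the last timestamp here) rather than the start of a sequence.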

http://www.basetheory.com.au/wp-content/uploads/2013/05/canonimport-2013-5-29.zip
#7
Hey all. Just wrote a Mac OSX bash shell script, for importing video off CF cards from the canon.

http://www.basetheory.com.au/osx-canon-video-import-script/

I thought it was getting a bit silly with the amount of messing around required to get all the different footage formats off one single camera (h264, h264 slow-mo, CR2 timelapse, 14-bit raw video), so I decided to write a script that would sort and convert it all for me at once, regardless of what combination of footage I'd shot that day.

It's a shell script, which is daunting for some (and could probably stand to have a simple GUI makeover), but it seems to do the job pretty well once the environment variables are set.

Basically just plug in your CF card. Run the script. Go make yourself a coffee and wait.

It will:
1) create a folder in your scratch location based on the date of ingest
2) convert 14-bit raw video to DNG sequences
3) create mjpeg proxy videos of those DNG sequences for editing
4) determine the framerates of the h264 video
5) copy the standard-framerate files straight over, and conform the slow-motion footage
6) create mjpeg proxy videos of the h264 footage for editing
7) convert CR2 files to a DNG sequence (treating it as a timelapse)
8) create an mjpeg proxy video of the DNG timelapse sequence for editing
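Steps 1 and 3/6 might look something like this minimal sketch. The folder names are illustrative (not necessarily what the script actually creates), and the transcode loop is commented out since it requires ffmpeg; the mjpeg flags shown are just one plausible way to make proxies:

```shell
#!/bin/bash
# Step 1: create a dated ingest folder in the scratch location.
SCRATCH=$(mktemp -d)                 # stand-in for the real scratch path
INGEST="$SCRATCH/$(date +%Y-%m-%d)"
mkdir -p "$INGEST/Video" "$INGEST/Proxy" "$INGEST/Stills"

# Steps 3/6: transcode each clip to an mjpeg proxy (requires ffmpeg), e.g.:
# for clip in "$CARD"/DCIM/*/*.MOV; do
#   ffmpeg -i "$clip" -c:v mjpeg -q:v 4 -c:a copy \
#     "$INGEST/Proxy/$(basename "${clip%.MOV}")_proxy.mov"
# done

echo "ingest folder ready: $INGEST"
```

Dating the top-level folder by ingest (rather than by shoot) keeps each card dump self-contained, which is what lets the script run unattended while the coffee brews.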

Let me know what you think. If you have any suggestions for improving it, comments and criticisms are welcome.

http://www.basetheory.com.au/wp-content/uploads/2013/05/canonimport.zip
#8
Thanks Mucher.

Naturalbornsamy, I just installed the trial version of After Effects CS6 on my Mac, and the script worked without any errors, so I'm scratching my head as to what the cause could be. Exactly what version are you running, and are you on Windows or Mac? Did you run it with only the HDR video files selected in the project media browser window? Also, could you go into the "Effects & Presets" panel and check under the "Expression Controls" drop-down that the "Slider Control" effect is there?
#9
Hmmm, if the script is running, I highly doubt that it's anything you did wrong.

I'm on an earlier version of After Effects, so my guess is that they may have slightly changed the scripting API between versions, in how filter effects are added to layers.

The "Slider Control" is an effect which doesn't actually alter the layer in any way (it's kind of a dummy effect) but is useful for controlling things with expressions. The HDR script temporarily adds a Slider Control effect to the video layer so it can run an expression to analyse the pixels on frame 0 and frame 1 and determine which order the high and low ISO frames are in.

I'll see if I can get my hands on the latest version of After Effects and find out what's going on. Money is a little tight, which is why I'm still on an older version, but I've been told they have a new monthly subscription thing which could make life a little easier.

I'll keep you posted.
#10
Hi all. So I started playing around with Magic Lantern's HDR video the other day and decided to write a little After Effects script to automate my workflow, and thought I should share it here in case anyone is interested.

http://www.basetheory.com.au/magic-lantern-hdr-video-script/

It's a pretty simple script, and it should be totally platform independent as long as you have AE (for all those Mac heads who've been going insane with Wine and enfuse).

The script will:
1) create a new composition for your video file
2) detect whether the first frame is high or low ISO
3) split the frames to separate layers
4) tonemap those layers
5) add a few effects for fine tuning

Also, it's totally GPL, so feel free to hack it to pieces.

Comments and criticisms are welcome. If you can think of anything that would improve the script, leave a comment.