Messages - DFM

#76
General Help Q&A / Re: Eye-fi advice
September 04, 2012, 10:05:30 AM
It depends on whether the Eye-Fi card is an X2 version (which I suspect yours will be if it's new). The older non-X2 cards are reported to work fine with ML, but the X2s don't set boot flags, so if the camera's NVRAM is expecting to boot from SD (after installing ML using a normal card) it simply won't start. You can't even get the Canon firmware to load properly; you need to uninstall ML before swapping to the Eye-Fi card, then re-install it afterwards (not only is that a pain, it raises the question of how many times we can safely rewrite NVRAM).

See http://www.flickr.com/groups/magiclanternfirmware/discuss/72157627990119618/#comment72157627879367185

I don't see much hope for fixing X2s in the short term, as Eye-Fi would have to change their products (it may be fixable by a firmware update to the Eye-Fi card itself, but there's no commercial argument for doing so, given how few of their customers want to run ML). People have asked them and gotten nowhere.
#77
Post-processing Workflow / Re: Iridas Speedgrade
September 03, 2012, 04:03:03 PM
Further to my comment about dual-screen setups and hardware color wheels, you can see a typical colorist's setup in the first minute of this wonderfully Dutch video:

http://tv.adobe.com/watch/learn-speedgrade-cs6/fxphd-fastforward-speedgrade-cs6-fundamentals/
#78
Post-processing Workflow / Re: Iridas Speedgrade
September 02, 2012, 01:17:38 PM
/ disclaimer - I work on this stuff, but I'm not paid to talk about it  :) /

Adobe bought Iridas so SpeedGrade is indeed now part of CS6 - it's available as a standalone product and is bundled in Production Premium, Master Collection and Creative Cloud.

To ingest Quicktime files you need QT 7.6.6 or later installed. I've tested with files from EOS cameras and they behave fine (with and without audio, from native and ML firmware, etc - if QT can play it, SG can eat it). If you send footage from After Effects or Premiere Pro to SpeedGrade, all the footage (whatever it was originally) is converted to an image sequence.

You can export from SG to AJA KONA 10-bit MOV files (R10g log or R10k linear), or to Cineon/DPX/TGA image sequences. Personally I'd advise sending to DPX, especially if you're on Windows; AJA R10 is really there to support users who have KONA hardware, and their latest QuickTime playback codec is only available for OS X. Bear in mind that R10g is not actually logarithmic; it's simply the version of R10 which uses bits 0-1023. You can put log-graded data inside or not, it doesn't care.


@weldroid: I agree the UI is strange to say the least, though that will change in time. Right now it's the same as it was when Iridas sold it, and their customers were all professional colorists with dual-screen setups (the control panels and waveforms on one screen, the video output on a calibrated monitor). Put it all on one screen and things don't fit very well. The concept of how you work (with grading layers) is also strange to people coming from an NLE background who think of 'effects' applied to the video layer itself, but it's familiar to colorists. Often they won't even use the on-screen controls; they'll have hardware color wheels on the editing table.

Grading-wise, if you really have the time to learn it SG is powerful, but for quick fixes I'd not bother - the luma curve, fast and 3-way color correctors in FCP, AE or PP are adequate for most people most of the time. The one thing I use SG for a lot is the auto-calibration tool: if you show a Macbeth card at the start of your footage, SG will automatically work out a 3D LUT to pull every chip into calibration, so you can shoot any combination of hardware or image settings and get back to 'reality' in one click. It comes in handy for the many flat/log profiles for EOS that don't come with an inverse LUT, and to match footage from two different cameras when someone forgot to select the right profile!  ::)
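
Under the hood that calibration is a fitting problem: you know what each chart chip's RGB values should be, and you can measure what the camera actually recorded. A much-simplified sketch of the idea in Python (a plain 3x3 matrix fit with numpy rather than a real 3D LUT, and the chip values below are invented for illustration, not genuine Macbeth data):

```python
import numpy as np

# Reference RGB values for a few chart chips and the values the camera
# actually recorded. These numbers are illustrative only.
reference = np.array([[0.40, 0.22, 0.15],   # "dark skin" chip
                      [0.20, 0.30, 0.50],   # "blue sky" chip
                      [0.90, 0.90, 0.90],   # white chip
                      [0.20, 0.20, 0.20]])  # grey chip
measured  = np.array([[0.35, 0.24, 0.20],
                      [0.22, 0.28, 0.42],
                      [0.80, 0.85, 0.95],
                      [0.18, 0.21, 0.25]])

# Least-squares fit of a 3x3 matrix M so that measured @ M lands on reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

corrected = measured @ M
print(np.round(corrected, 3))  # close to the reference chips
```

A real 3D LUT can correct non-linearities that a single matrix can't, but the principle is the same: pull the measured chips onto their reference values, then apply that transform to every pixel.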
#79
Technically speaking you can't shoot video 'tethered' - the data rate is way too high for the USB port to manage, so while you can start and stop video recording via the software, the footage itself is always written to the SD card.

Adobe Lightroom 4 does not support all Canon DSLRs in tethered mode, as Canon doesn't tell anyone what the USB control protocols are - the LR engineers are in the same boat as the ML developers in that regard. Development has concentrated on the 1D family as they tend to be most popular with studios, so right now we don't have support for the 5DmkIII, 600D or 650D, and I can't comment on when they will be added.

Full list is here: http://helpx.adobe.com/lightroom/kb/tethered-camera-support-lightroom-4.html
#80
General Help Q&A / Re: Video external record
August 28, 2012, 07:03:50 AM
To record onto an external device you need to capture the output of the HDMI port, not the USB port. That will require additional hardware, either a self-contained HDMI recorder (Atomos Ninja, etc.) or an interface unit (Matrox Mini, etc.), both of which cost a heck of a lot more than 70 bucks.

The critical thing is that in standby the HDMI signal runs at 3:2 aspect to match the Live View screen, so you only get 1620x1080i instead of 1920x1080p. Cropping back to 16:9, the best you can get is 1620x910i. When recording you only get 480i, with a usable image of 640x388. Because it's all interlaced - there's no way to get progressive or raw output - the visual resolution is even smaller, and you'd need an external monitor.
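
For what it's worth, the 910 figure is just the 16:9 crop of a 1620-pixel-wide frame, rounded down to an even number of lines:

```python
# How many lines of the 1080 delivered over HDMI survive a 16:9 crop
# when the frame is only 1620 pixels wide?
width = 1620
crop_height = int(width * 9 / 16)   # 911.25 -> 911
crop_height -= crop_height % 2      # round down to an even line count -> 910
print(crop_height, "usable lines")
```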

The chip in a 550D was never intended to record for extended periods, and hoping to get 72 minutes indoors without it shutting down or requiring mains power is really pushing your luck. All in all, DSLRs are not designed for this type of thing and you would be much better off hiring a regular video camera for the day (though getting 72 minutes of uninterrupted footage will be tricky even then, as you're going to hit file limits on storage cards unless you use super-low quality settings). The normal solution would be to use at least two cameras so you can cut between them in post - if they're far enough away and mounted side by side it'll be hard to notice the cuts.
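
On the file-limit point, each clip stops at the 4GB FAT32 boundary regardless of anything else. A rough back-of-the-envelope calculation (the 45 Mbit/s figure is an assumed average for high-quality 1080p H.264, not a measured number):

```python
# Rough clip length before hitting the 4GB FAT32 file limit.
# 45 Mbit/s is an assumed average bitrate; real footage varies with the scene.
file_limit_bytes = 4 * 1024**3
bitrate_bits_per_sec = 45e6

seconds = file_limit_bytes * 8 / bitrate_bits_per_sec
print(f"{seconds / 60:.1f} minutes per clip")   # roughly 12-13 minutes
```

So 72 minutes in one take means either several clips joined in post or dropping the quality a long way.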
#81
It's not possible to handle ML's video format with CS6 out of the box - quite frankly, we never considered the idea of exposure-interleaved frames when designing the feature sets for PP and AE. Nobody else does video HDR that way, and Adobe can't support every niche application natively; that's why there's a plugin API.

To split and merge the footage you will need to use AE as your first tool, as PP doesn't work on a per-frame basis. While it's possible to do the frame remapping by hand (duplicate the layer, remove frame #1 from the upper copy, time-stretch both layers to 200% speed, etc.), you won't be creating true HDR, as the overlay options for layers in AE (add/subtract/screen/etc.) don't understand the exposure data in each frame. The GingerHDR 'merger' tool is free and handles this step very well, assembling an EXR image sequence from your footage where each image is a true high-bit-depth file (see the video at http://vimeo.com/39086841 for a workflow). You can then bring the EXR sequence into PP, and since PP works internally using a 32-bit floating-point buffer, the shipped color grading and curves tools will cope just fine with the HDR frame data.
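
If you'd rather script the split step yourself, the idea is simply to send even and odd frames to two separate streams, then merge the pairs afterwards. A rough sketch of that separation using Python and OpenCV (this is not what GingerHDR does internally, the filenames are placeholders, and which stream ends up as the darker exposure depends on your ML settings):

```python
import cv2

# Split an exposure-interleaved clip into two streams: even frames in one,
# odd frames in the other. The HDR merge/tonemap is a separate, later step.
src = cv2.VideoCapture("hdr_interleaved.mov")       # placeholder filename
fps = src.get(cv2.CAP_PROP_FPS) / 2                 # each stream keeps half the frames
w = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")

stream_a = cv2.VideoWriter("exposure_a.mp4", fourcc, fps, (w, h))
stream_b = cv2.VideoWriter("exposure_b.mp4", fourcc, fps, (w, h))

n = 0
while True:
    ok, frame = src.read()
    if not ok:
        break
    (stream_a if n % 2 == 0 else stream_b).write(frame)
    n += 1

src.release(); stream_a.release(); stream_b.release()
```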
#82
The Yongnuo RF600/RF602 flash trigger comes with a 3.5mm shutter release cable that allows you to send full and half presses to a Rebel series camera.

Just tried them with the ML remap to half-press and they work perfectly.
#83
It depends on which NLE you're using, but converting to a mezzanine codec such as ProRes is not about the bit depth as much as the decompression complexity. Bit depth is important if you're passing edits out to masters, but not on the ingestion pass.

If, for example, you drop 8-bit Canon DSLR H.264 footage into Premiere Pro, the color and keying effects are applied in 32-bit floating point irrespective of the source footage bit depth - but scrubbing H.264 is a massive load on your CPU compared to scrubbing ProRes or DNxHD. In Premiere there's no advantage in quality terms to transcoding beforehand, but a huge improvement in the responsiveness of the track controls.

Some other NLEs retain the source bit depth on the timeline, so coloring an 8-bit H.264 will only operate in 8-bit even if the effect supports more. In those cases transcoding can cheat the NLE into applying its effects at a higher depth, but you're still interpolating data points that aren't in the original file, so it can't work as well as the 10-bit raw you get from some brands of camera.
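
If you do decide to transcode before ingest, it's a one-line job with ffmpeg. A minimal sketch wrapped in Python (assumes an ffmpeg build with the prores_ks encoder is on your PATH; the filenames are placeholders, and remember the extra bits are padding, not extra picture information):

```python
import subprocess

# Rough transcode of a Canon H.264 MOV to ProRes 422 HQ for smoother scrubbing.
subprocess.run([
    "ffmpeg", "-i", "MVI_0001.MOV",
    "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes 422 HQ
    "-pix_fmt", "yuv422p10le",                # 10-bit 4:2:2 in the container
    "-c:a", "pcm_s16le",                      # uncompressed audio
    "MVI_0001_prores.mov",
], check=True)
```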

Quote from: vetec on August 21, 2012, 04:40:51 PM
I never try to decode canons .mov files to prores or DNxHD, but I have seen results. So called "bending" is much better when you work in 10bit color space.
#84
Purpose-built dolly carts tend to be very expensive for what they are, but a tripod dolly is the opposite - there are certainly some scary prices on the top brands ($500+) but the likes of Velbon and Fancier typically sell for under $40, and it'd be hard to make one yourself for that price.

You're correct, a top-line Steadicam operator can charge a decent rate - but it's a very specific market, they won't usually be flying DSLRs, and the DP expects them to come with gear. You're looking at $100k for a full rig excluding the camera.

For impromptu stuff in one spot when a full tripod isn't practical, I find a monopod with a fluid head (without the handle) is a good compromise. You can plant it on the ground or shorten the stick and tuck it into your pocket - allied with a neck strap you can be reasonably stable if you breathe right, and you're still able to focus, pan and tilt.
#85
Personally I prefer the Flycam styles, but the Nano is pushing it with a full DSLR setup on top, which is why they do a whole series (the 3000 is a better choice for the weight of a DSLR with pro glass; it'll even cope with rails and a follow focus).

One advantage of a vertical stick is that you can put stuff on the bottom sled, such as a battery pack and monitor (the more you put on the bottom the more you can then put on top). If weight becomes a pain - and it will - then you can always add an arm brace and vest.

As to the 'not using it much' - I think a lot of people new to DSLR video are tempted by the blog chatter about these things, but look at the motion picture sector in comparison. We fly things, but it's very much a last resort because it creates far more problems than it solves - sliders, dollies and cranes all come first. Flying only makes sense if the camera is tracking a fixed horizon at a fixed distance - you can pan with some care but you cannot tilt at all. You have no access to the camera controls, you can't zoom or pull focus without an electronic FF and balanced lens. You can't even look through a loupe, so outdoors you'll want an external monitor.

If you're standing still, you're always better off with a tripod or monopod. In a studio a dolly cart or crane is more flexible - I know cranes are expensive, but given a decent floor you can dolly off anything with wheels (a DIY dolly cart is insanely simple compared to a DIY flyer). Flying works as a follow shot if the camera has to walk through a street or around a building and you can't use a cart for space reasons, but try walking around with your eyes fixed level; it's a strange viewpoint on the world, so it only works for short cuts. It doesn't matter whether you're using a Flycam or a Steadicam, it always takes a while to balance these things, so if time is an issue (e.g. event shooting) you may miss your only chance to capture the action unless you have multiple cameras in play.

Flyers have their uses; if you're filming interiors for real estate commercials it's going to be easier to get one through the doors, but buying one won't make your footage any better - just different - and it'll take a while to become proficient.
#86
The H1 is a great device but has limitations - the internal mics are crossed for stereo separation, which makes them terrible for directional recording on a camera: in the mix you are actively ignoring whatever you're pointing the device at and enhancing the room tone and background clutter from either side of your scene. For a live gig and a locked FOH camera that may not matter, as you'll be aiming each mic at the PA stacks, but as soon as you pan the camera your stereo mix will be ruined.

For voice work the mic should be as close as possible to the talent and aimed to reduce everything else; in a 2-up interview, for example, it could be placed standing up on the floor or table between them (junk on the table such as cups, vases etc. is useful to hide behind) or hung from the ceiling (gotta love fishing twine). Lav mics will give better signal/noise, and you can plug two into the H1 by using a stereo-mono splitter cable (use the same brand so the levels match). A studio-quality lav is expensive, even more so with radio links, but the H1 will work just fine in someone's pocket, plugged into a hardline lav mic. The cheapest ones are hideous, but decent quality isn't expensive either - around $20 each.

If you're keeping the H1 near the camera and you need to pan, I'd suggest mounting it to the tripod sticks instead of the camera rig (magic arm and a clamp). For live gigs, if we're shooting handheld I tend to clip it to the ceiling beforehand and mains-power it through the USB port (the battery dies long before the SD card is full).

Personally I never record audio on a DSLR - not only can you push the data rate higher with silent video, but what the H1 records to its own card is far better quality and isn't limited to 15-min chunks (there's still a 4GB limit, but you can stuff several hours into that). Without sync you'll need to clap each scene so there's something to line up (with a clapperboard or literally a clap of your hands in front of the lens), and it's important to either slate the scene* or match the date/time of each device so you know which audio file goes with which video. Slates are a pain to write out if you're shooting a live gig, but timecode matching is easy once you've got the hang of it. You don't have to be second-perfect, just close enough to tell which file goes with which (there's a rough alignment sketch at the end of this post). If there's only a couple of shots per session, simply count your claps (one for scene one, two for scene two, etc.).

*Slating means showing the camera a visible 'clapperboard' card listing what's in the scene; you must also read it out loud for the audio track.
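
If you'd rather not line the claps up by eye and ear, the offset can be estimated numerically: cross-correlate the camera's scratch audio against the H1 recording and the peak tells you how far apart they are. A rough sketch with numpy/scipy (assumes both tracks are already mono WAV files at the same sample rate; the filenames are placeholders and this isn't part of any ML or Adobe tool, just the general idea):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# Estimate the offset between the camera scratch track and the H1 recording
# by finding the peak of their cross-correlation (the clap dominates it).
rate_cam, cam = wavfile.read("camera_scratch.wav")   # placeholder filenames
rate_h1, h1 = wavfile.read("h1_recording.wav")
assert rate_cam == rate_h1, "resample one track first if the rates differ"

cam = cam.astype(np.float64)
h1 = h1.astype(np.float64)

xcorr = correlate(h1, cam, mode="full")
lag = int(np.argmax(xcorr)) - (len(cam) - 1)   # sample offset of the clap
print(f"offset: {lag / rate_cam:+.3f} seconds (sign tells you which file leads)")
```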