Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - DFM

Pages: [1] 2 3
1
Use Cineform - it's natively supported in CC2015.

2
Variable ND filters are polarizers - just two stacked into one unit. You cannot stack another polarizer on top, it'd just give you a dark blue/purple mess.

VNDs are simpler to work with for video as you can dial in any value, and if you only use them at the low end of the range they work OK, but at maximum attenuation you'll get the classic X-pattern interference problem on wide shots - it doesn't matter how much you pay, it's just how they work. The color casts can also change as you adjust them, making color correction in post an utter nightmare. VNDs are fine for run-and-gun shots where the lighting is constantly changing, but it's a trade-off.

Fixed NDs don't use polarizers, so they have no patterning on wide shots and the color casts never change (even if you have a cheap set with pronounced casts, it's easy to correct using a reference photo of a Macbeth chart). Yes, you have to carry a set of 3 or more, you can't dial in exactly 3.729 stops, and without a matte box they're fiddly to change, but their reliability and consistency are why professional DPs favor them. A set of three 77mm fixed NDs from Tiffen runs around $100 including a bag.
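If the density/stops arithmetic ever trips you up, it's easy to check - a quick Python sketch (the densities here are just the common Tiffen set, purely illustrative):

```python
import math

def nd_stops(optical_density: float) -> float:
    """Convert ND optical density (e.g. 0.9) to stops of attenuation."""
    # Each stop halves the light: 2**stops = 10**density
    return optical_density / math.log10(2)

def filter_factor(stops: float) -> float:
    """Light-reduction factor for a given number of stops."""
    return 2 ** stops

# A typical Tiffen set: ND0.3, ND0.6, ND0.9
for density in (0.3, 0.6, 0.9):
    stops = nd_stops(density)
    print(f"ND{density}: {stops:.2f} stops, factor {filter_factor(stops):.0f}x")
```

So an ND0.9 is very close to 3 stops (an 8x light reduction), which is why the classic 0.3/0.6/0.9 set covers 1 to 3 stops and stacks predictably.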

3
General Help Q&A / Re: Continuous recording 5D mIII
« on: April 14, 2015, 08:23:31 PM »
Quote
(I can only manage to film for 14 seconds).

That means you're shooting raw video, which cannot be recorded externally from a 5D3. Field recorders plug into the HDMI port, so they get an 8-bit 4:2:2 feed that's a slight step up from the internal H.264 but nowhere near the quality of RAW/MLV. An external recorder won't solve your problem of only getting 14 seconds of raw footage - for that you simply need a faster CF card - and if anything it'll make things worse, as turning on HDMI mirroring increases the load on the CPU.
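For a rough sense of why the card is the bottleneck, here's a back-of-the-envelope Python sketch - the 14-bit/1080p/24fps figures are typical 5D3 raw settings, and the buffer/card numbers are purely illustrative:

```python
def raw_datarate_mb_s(width, height, bits_per_pixel=14, fps=24):
    """Sustained write rate (MB/s) needed for uncompressed raw video."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 2**20

def buffer_seconds(buffer_mb, card_mb_s, needed_mb_s):
    """Seconds of recording before the buffer fills, given a card that
    only sustains card_mb_s against a needed_mb_s data rate."""
    deficit = needed_mb_s - card_mb_s
    return float('inf') if deficit <= 0 else buffer_mb / deficit

# 5D3 full-HD raw needs roughly this from the CF card:
rate = raw_datarate_mb_s(1920, 1080)
print(f"{rate:.0f} MB/s sustained")  # 83 MB/s sustained
```

If the card falls short of that sustained rate, recording stops once the in-camera buffer fills - which is exactly the "14 seconds then stop" behaviour.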

4
Post-processing Workflow / Re: Methods for Flattening DNG Sequences
« on: April 09, 2015, 01:44:52 PM »
Process Version 2012 in Camera Raw has histogram-adaptive sliders for the tone controls (whites/highlights/shadows/blacks/clarity), so you must NOT use them on image sequences - the boundaries between the tonal regions each slider operates on shift from frame to frame, causing flicker in the video. Exposure and Contrast are OK, as is the Tone Curve panel.

You could switch back to PV2010 if you really need to, but luma/chroma adjustments are best done with native AE effects (just make sure you're working in a 16 or 32-bit comp). Reserve the ACR interface for things like lens corrections, camera profile calibrations and some light-touch noise removal.
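If it helps to see why histogram-adaptive controls flicker, here's a toy Python sketch - simulated "frames" with a made-up noise level, nothing to do with ACR's actual algorithms:

```python
import random

random.seed(42)

def percentile(values, p):
    """Crude percentile: value at the p% position of the sorted list."""
    s = sorted(values)
    return s[int(p / 100 * (len(s) - 1))]

# Two "frames" of the same scene, differing only by sensor noise.
frame_a = [0.5 + random.gauss(0, 0.02) for _ in range(1000)]
frame_b = [0.5 + random.gauss(0, 0.02) for _ in range(1000)]

# Histogram-adaptive control: a white point derived from the 99th
# percentile shifts between frames -> flicker.
wa, wb = percentile(frame_a, 99), percentile(frame_b, 99)

# Fixed control (a plain curve): identical mapping on every frame.
def fixed(v):
    return min(v * 1.2, 1.0)

print(f"adaptive white point drift: {abs(wa - wb):.4f}")  # nonzero
print(f"fixed mapping drift: {abs(fixed(0.5) - fixed(0.5)):.4f}")  # 0.0000
```

The adaptive value wobbles with the noise in every frame; the fixed curve can't, which is the whole argument for curves and native AE effects on sequences.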

5
It should be OK, but you should try it out for levels first (you can do that anywhere, just get the distance about right and talk at it). The VideoMic doesn't have the +20dB gain switch that the Pro has, so you'll be cranking up the H1's input gain. In an ideal world the signal should be averaging around -12dB on the H1's meter and peaking at -6dB - that way you have headroom for stuff like applause and laughter.
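For reference, the dB arithmetic behind those targets is straightforward - a small Python sketch (dBFS relative to the H1's 0 dB clip point):

```python
def db_to_amplitude(dbfs: float) -> float:
    """Linear amplitude (1.0 = full scale) for a dBFS value."""
    return 10 ** (dbfs / 20)

def headroom_db(peak_dbfs: float) -> float:
    """Decibels left before clipping at 0 dBFS."""
    return -peak_dbfs

avg, peak = -12.0, -6.0
print(f"average {db_to_amplitude(avg):.2f} FS, peak {db_to_amplitude(peak):.2f} FS")
print(f"{headroom_db(peak):.0f} dB of headroom for applause")  # 6 dB
```

Peaking at -6dB means you're only using half the available amplitude, so a sudden burst of applause can double in level before it clips.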

6
You'll need something to make the Rode point in the right direction - it's only semi-directional, but it still won't pick up someone speaking four chairs away. Either get a small 'tabletop' tripod from eBay, or if they're too big to hide you can use the plastic foot from a speedlite and a tiny ball-head cold shoe adaptor.

With your current gear you would have to move the mic between speeches, which I agree isn't a good plan. The only workable way around that (unless the speakers want to all stand in the same place, or they're using a handheld mic feeding the venue PA) is to rent a true shotgun mic and mount it at the back of the room on a light stand (or held above the crowd by an assistant) so you can point it directly at whoever's speaking. Lavs and bodypacks would give the best sound and are often used on the pastor during the ceremony (they're used to wearing them and it will pick up the vows too), but to cover the speeches you'd need a full set of bodypacks, and that's $$ even if you're renting. Swapping them between people isn't an option once the reception's started.

If the speakers are within a few chairs of one another, the Rode would get better coverage mounted to the ceiling (45 degrees up in front of their table, with a long 3.5mm extension cord to the H1 so you can reach it to monitor levels and change batteries) - you'd lose out on signal-to-noise but avoid the gear-shuffling problem. It's vital to practice so you know it's going to work, and always do a soundcheck in the venue before anyone turns up. You don't get to do ADR on a bride.

Putting a separate desk mic in front of each person is the last-resort solution, but then you're turning it into a press conference...

7
The internal mics on an H1 are very sensitive to ambient sound - great for recording the atmosphere in a space, but terrible for capturing clean speech in a crowded room unless you turn the gain right down and hold it in front of your face. If the venue has a PA, you could take a line-in feed from their mixing desk to the H1.

I'd suggest plugging the Rode into the H1 so you get clean, directional sound from a sensible distance away (on the table in front of the speaker, hidden behind flowers, etc.) and leaving your camera on auto-gain with the internal mic, just to pull a scratch channel for timing. An editor doesn't really need audio on the video at all, but being able to listen to the first sentence can help when you're sorting through all the files if you forgot to sync the clocks on both recorders.

As to slating, if you're lucky the start of your standard-issue wedding speech is someone tapping a wine glass with a knife. Sync on that, or wait until the end and sync on the moment when the people sitting beside the speaker start clapping. It's not actually difficult to sync against plain speech, it just takes a little longer to fine-tune the result.
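If you'd rather automate the alignment, the underlying idea is just cross-correlation - a minimal brute-force Python sketch with toy one-dimensional signals (no real audio I/O, purely illustrative):

```python
def best_offset(ref, other, max_lag):
    """Lag (in samples) that best aligns `other` to `ref`,
    found by brute-force cross-correlation."""
    def score(lag):
        # Sum of products over the overlapping region at this lag.
        return sum(ref[i] * other[i - lag]
                   for i in range(max(lag, 0), min(len(ref), len(other) + lag)))
    return max(range(-max_lag, max_lag + 1), key=score)

# A sharp transient (the glass tap) at sample 30 on one recorder
# and sample 50 on the other:
ref = [0.0] * 100
other = [0.0] * 100
ref[30] = 1.0
other[50] = 1.0
print(best_offset(ref, other, 60))  # -> -20 (the tap is 20 samples later in `other`)
```

Real sync tools do essentially this on the audio envelopes, just with FFT-based correlation so it's fast on minutes of audio.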

8
Post-processing Workflow / Re: PP - AE - Dynamic Link issue
« on: February 27, 2015, 12:18:36 PM »
You should ask in the official After Effects forum: https://forums.adobe.com/community/aftereffects_general_discussion

9
General Help Q&A / Re: Using Magic Lantern WITH Cinestyle?
« on: February 25, 2015, 12:01:53 PM »
It does make perfect sense, if you're using an external recorder. The HDMI output is always affected by the picture style, even when shooting raw.

When we're filming on a 5DIII the usual setup is MLV to the internal card and HDMI mirrored to a Ninja Blade (recording in ProRes as both a backup and for dailies). Even though the Ninja gets 4:2:2, using a flat picture style makes for much easier grading, and more realistic scopes on the Ninja.

By applying paired styles (VisionColor to the HDMI in camera and VisionLOG to the cDNGs in post), it's a lot easier to intercut footage if there's a glitch in the MLV, or apply a working grade to the dailies that's not far off the planned print.

10
Feature Requests / Re: ACES for Magic Lantern Raw Video
« on: February 11, 2015, 01:26:07 PM »
Quote
If we remeasure/recalculate we will come up with the same numbers.

And if you did create an independent set of values via your own measurements and called it "ML Standard", we wouldn't be having this conversation.

Quote
This is idiotic. By this logic I can't even use my operating system to make a copy of a DNG that has been through DNG converter, because I would be using it "to create other DNG files" with said matrix information embedded in it.

No, you're misunderstanding what I said. Adobe makes no ownership claim against any files you create with their products, but the algorithms used by Adobe software to create those files are copyrighted and/or patented, as are certain file formats (e.g. the structure of a PSD file is proprietary even though Adobe publishes the full specs). If you used Photoshop to create a panorama, you still own the rights to the stitched image - but you certainly cannot take the class library that does the stitching and compile it into your own program.

11
Feature Requests / Re: ACES for Magic Lantern Raw Video
« on: February 11, 2015, 11:31:45 AM »
There's a difference between editing a DNG file that already has matrix information embedded within it, and copying the Adobe Standard matrix supplied with the Camera Raw/DNG Converter software so you can use it to create other DNG files (or for that matter, mentioning the term "Adobe ###" in any third party product).

The EULA for the DNG SDK is different to that for Adobe's desktop software: it permits any use of the SDK's source code provided copyright notices are maintained - and that's why there are no matrices, lens profiles, etc. in the package. The EULAs for Camera Raw and DNG Converter are the standard versions, which forbid any extraction, decompiling or reuse (clauses 4.5b and 5.2).

I can't give you an exemption from any of this; you would need to negotiate terms with Adobe's legal department. In the first instance I would advise posting a detailed request on the DNG Community Forum (including explanations of the license terms for ML, where and how the matrices would be used, what you intend to call them, etc.) and I'll make sure it gets read by the right people.

12
Feature Requests / Re: ACES for Magic Lantern Raw Video
« on: February 11, 2015, 01:41:44 AM »
Sorry but the Adobe DNG SDK does not include matrices, just the C++ libraries required to interact with a DNG file object. All support files supplied with installations of Adobe Camera Raw or the Adobe DNG Converter utility are copyrighted and subject to Adobe's standard end user license agreement. They may not be extracted for standalone use or incorporated into any third party product.

13
Post-processing Workflow / Re: URGENT: Post-processing time estimate??????
« on: February 03, 2015, 03:40:12 PM »
There's no single answer.

RAW/MLV obviously fills cards much faster, so you will need to dump them on location or carry a large box of new cards. A lone shooter with two cards could be sitting for 20 mins waiting for a laptop to suck each one across a USB3 connection onto an external drive; if you have a suitcase of cards you don't care, and with a DIT it all happens while you're shooting the next scene.
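The back-of-the-envelope for dump times is simple enough - a Python sketch with illustrative card size and throughput numbers (real-world USB3 card readers rarely hit their headline speeds):

```python
def offload_minutes(card_gb: float, sustained_mb_s: float) -> float:
    """Minutes to copy a full card at a given sustained throughput."""
    return card_gb * 1024 / sustained_mb_s / 60

# A full 128 GB card through a reader that sustains ~110 MB/s in practice:
print(f"{offload_minutes(128, 110):.0f} min per card")  # 20 min per card
```

Multiply by cards-per-day and it's obvious why a DIT (or a suitcase of cards) pays for itself on a raw shoot.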

RAW/MLV is fractionally less reliable than the standard 5D3 mirror solution with H.264 (recording to cards and a Ninja at the same time) so you may want to review every shot to make sure there are no torn frames. In that case you will be sitting at laptops for as long as you were sitting at the camera - or again hire a DIT.

Once you have all your footage back in the edit room, converting it to CinemaDNG can either take no time at all (with MLVFS) or a few minutes per 4GB chunk (with the other tools and a typical spec desktop machine). If you're not using MLVFS, just batch process them overnight.

Editing the footage is no different, but if you use proxies and relink to the cDNGs for final rendering, it will of course export much slower than an H.264 timeline. How much slower is impossible to say - it depends on your hardware and what effects you've applied. If you need an estimate, shoot 60 secs of something random in RAW and H.264 and run both through your production pipeline.

14
Raw Video / Re: Problems with workflow Photoshop RAW -> Premiere CC
« on: January 22, 2015, 02:10:32 PM »
Quote
Could someone tell me what's the best choice to use when exporting those .dng to edit in Premiere? h264. or AVI or something else?

If you have CC fully updated, then you have the Cineform codecs installed - they give one of the best performance/quality combinations on Windows machines. Alternatives are DNxHD (codecs are free from AVID) or ProRes on OS X. If you're not offloading files to an external studio it's a matter of personal choice which you use. There are very slight quality differences but if you're sending the final video to the Web or HDTV, it's not something your audience will notice. Disk space is often the deciding factor.

15
Hardware and Accessories / Re: Walimex Rigs
« on: January 12, 2015, 08:02:34 PM »
The "Walimex Rig Set Basic" is the same Chinese non-brand setup that world+dog sells on eBay for around €150 (Fotga, eimo, ProDSLR, etc.). The thing they call a matte box just... isn't. The rods are usable, but the clamps tend to snap if you overdo things. Don't waste your money.

I agree you can easily spend four figures on a rig, but you need to work out what you actually need it to do (now and in the future). Do you intend to use the matte box as designed, to hold square filters, or just as a sunshade? Changing lenses often? Swing-away matte box would help for that. Room to attach a monitor, audio recorder, Atomos, V-lock battery, etc? Convertible from shoulder rig to tripod mount or caddy cage? Do you actually need a follow focus for the style of shooting you're doing? Rather than buying an all-in-one kit, most pros get a good quality but very basic rod system, then add components to build something to their own spec. It doesn't have to be hugely expensive if you steer clear of the top names.

No connection to them, but compare the Walimex stuff to the Filmcity FC-03 (around €300 on eBay) - that has a matte box that can actually take filters, and a counterweighted shoulder (you'll deeply regret a rig without a counterweight after the first 60 seconds). It's still plastic in places - you'd not get an all-metal rig for this price range - but I know which I'd prefer to use.

16
General Help Q&A / Re: Pink area on Highlights area
« on: December 29, 2014, 10:47:38 AM »
Buy a neutral density filter (either a set of fixed-value filters or a variable ND - the latter being more expensive but much more useful for video).

17
Duplicate Questions / Re: Camera RAW "Clarity" Flicker Problem ???
« on: December 20, 2014, 11:57:42 PM »
Clarity is just mid-range contrast expansion, so it's adaptive to the luma spectrum of each image (as are the black/white/shadow/highlight sliders) - that's what causes the frame-to-frame flicker. Use curves instead.

18
Feature Requests / Re: Custom frame rate sync for telecine use
« on: December 18, 2014, 02:16:59 PM »
Quote
I've seen people suggest something similar for capturing frames one at a time as photos, but I definitely want to capture video files here, not wear out my mechanical shutter and mirror trying to capture miles of 8mm film, so this is a little different than those requests.

Capturing a frame sequence is how telecine works - you can't escape it. Trying to video a film gate just doesn't make sense. 8mm frames aren't really worth the effort of a DSLR-sized sensor or raw capture; a JPG sequence from an HD webcam is going to look just as good/bad and there's no mechanical shutter to worry about. The gate speed of a home projector does drift so you need to be sensing each frame if you want reliable end results.

For a DIY solution the projector will need serious conversion anyway (cold lamp, single shutter plate, variable drive, macro lenses, etc.) so adding an optical vane sensor that triggers a webcam is simple in comparison. Buy a purpose-built USB logic input board to watch the output pin of the vane sensor, and knock up a slice of Visual Basic or Python to grab a webcam photo each time the optical path is broken (these boards all come with programming samples and an API). With an LED lamp conversion you can roll at much-reduced speed, so the camera/computer have plenty of time to react. If you prefer soldering to coding, some webcams have a physical "take a snapshot" button that you could wire into, and an Arduino board with a camera module could handle the sensing/capture process all by itself - search for ArduCam, they have examples and there's even a camera board with raw output.
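The sensing side boils down to edge detection on the vane sensor's output. A minimal Python sketch of just that logic - the sensor readings here are simulated, and in a real rig each trigger would call whatever capture API your webcam library or logic board's SDK provides:

```python
def frame_triggers(sensor_samples):
    """Yield the sample indices where the optical path is first broken
    (a falling edge on the vane sensor: light -> blocked)."""
    prev = True  # assume the path starts clear
    for i, clear in enumerate(sensor_samples):
        if prev and not clear:
            yield i  # in the real rig: fire the webcam capture here
        prev = clear

# Simulated polling of the sensor: True = light reaching the sensor.
samples = [True, True, False, False, True, False, True]
print(list(frame_triggers(samples)))  # -> [2, 5]
```

Note the edge detection only fires once per blockage, however long the vane blocks the beam - exactly what you want when running the projector at reduced speed.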

A bunch of people have built these types of DIY setup and documented their methods online in great detail - I wouldn't bother reinventing the wheel.

19
+1 for rental; every last ounce counts on transatlantic flights, and the US has a bunch of extremely good rental houses that'll ship anything from a gimbal to a dolly cart to your set - often for less than the step up to business class to get a decent baggage allowance - and you know it's going to arrive safely (I've had bags lost both ways). If you're using studio lights it's the only way to go, not least because your EU bulbs will be expecting 230V. Suggest you check the listings on MandY.com

Unless you're on a backlot or have a very generous budget and can get every possible bit of kit "just in case", blocking every shot beforehand is essential so you know if you need a shoulder rig, dolly (with or without rails), etc, etc. They do look slightly different and a camera operator can often tell what's used in a scene, but the real deciding factor is ground surface vs. camera path. Dollies are great if you want horizontal sweeps on concrete, but there's a reason Hollywood never followed anyone up stairs! Gimbals will get you into tight spaces, but if the operator has to walk about it takes a lot of practice to avoid the little bounces from each step as they're just not heavy enough, hence the Steadicam arm+vest system. Depends on budget but rather than dry-hiring a gimbal it might be safer to hire a local operator who owns one, even if they fly your camera.

20
Raw Video / Re: qDslrDashboard on iPad
« on: December 02, 2014, 04:31:09 PM »
The USB video feed is disabled whenever the RAW or MLV modules are active. Wireless monitoring is possible using an HDMI radio sender, but it's not cheap and the extra CPU load from HDMI mirroring sometimes messes up the recordings.

21
Raw Video / Re: Raw Recording using an Android Tablet
« on: November 30, 2014, 10:32:15 AM »
There's no USB feed when the raw modules are enabled. Also discussed here: http://www.magiclantern.fm/forum/index.php?topic=8081.0

22
You presumably don't want to have all 9TB online at the same time (given that would require more than 40TB of array space with spinning disks).

Your goal is to get the maximum throughput to your CPU, so there are a bunch of options:

- Internal SATA3-SAS RAID controller card with a bunch of drives plugged in - cheapest and fastest but you need a big tower case to fit it all in. Cards run around $50 to $200 depending on spec.
- eSATA connection to an external RAID box - fastest generally-available external option for Windows as you can't use Thunderbolt. Most new motherboards have an eSATA port, or buy a card for $20.
- USB3 connection to an external RAID box - headline speeds sound OK but you'll rarely get them, this is really a last-resort.

RAID boxes for studio work aren't cheap, in the way a Lear jet isn't cheap, but if you build your own from a bunch of bare drives and wiring it can be done for decent money. To minimize cost, work out the least amount of online footage you need at a time - for example if you can make do with 2TB online, a 4-way RAID with an internal SATA3-SAS card would be around $600 using 2TB hybrid drives.
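To size the array, the usable-capacity arithmetic per RAID level is worth sketching out - a Python sketch assuming no hot spares (the 2 TB x 4 drive example matches the budget build above):

```python
def usable_tb(drive_tb: float, drives: int, level: str) -> float:
    """Usable capacity for common RAID levels (no hot spares)."""
    if level == "raid0":
        return drive_tb * drives          # striping, no redundancy
    if level == "raid5":
        return drive_tb * (drives - 1)    # one drive's worth of parity
    if level == "raid10":
        return drive_tb * drives / 2      # mirrored stripes
    raise ValueError(f"unknown level: {level}")

# A 4-way array of 2 TB drives:
for level in ("raid0", "raid5", "raid10"):
    print(f"{level}: {usable_tb(2, 4, level):.0f} TB usable")
```

RAID0 gives you the full 8 TB and the best throughput but zero fault tolerance; RAID5 costs you one drive; RAID10 costs you half - pick based on how replaceable the footage is.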

As a comparison, here's a commercial unit rated for DPX sequences (same channel throughput as for HD cDNG): http://www.g-technology.eu/products/g-speed-es-pro.php

23
Share Your Videos / Re: Neat Video tests on RAW footage
« on: October 25, 2014, 12:21:18 AM »
Remember that with raw footage, or anything captured as a pure frame sequence, the requirement for noise removal is very different than for a video codec like H.264. The thermal noise on each DNG is random (so much so that sampling shot noise is one of the best ways to create random numbers in cryptography), hence it's much faster and simpler to reduce it per-frame using ACR etc. as if you're cleaning up a still image. Neat Video has some extremely good algorithms, but they expect a temporal relationship in the noise signal caused by frame interpolation in the codec, which is why it's such a CPU-intensive plugin. It'll work on raw sequences but it's overkill.
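The "random per frame" point is easy to demonstrate numerically - a Python sketch using simulated Gaussian noise in place of real sensor data:

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Thermal noise on two consecutive raw frames: independent draws,
# so the frame-to-frame correlation is essentially zero.
noise_a = [random.gauss(0, 1) for _ in range(5000)]
noise_b = [random.gauss(0, 1) for _ in range(5000)]
print(f"frame-to-frame noise correlation: {correlation(noise_a, noise_b):.3f}")
```

With no temporal structure to exploit, a temporal denoiser has nothing extra to work with on a raw sequence - per-frame spatial reduction gets you the same result for a fraction of the CPU time.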

24
Quote
... but transfer them to our 1TB Rugged drives for editing.

Trying to cut footage on an external drive is the worst idea. Premiere Pro will happily work with extremely high-density native footage but it *must* be able to read the file data fast enough, and with cDNG that means hundreds of MB/sec sustained throughput across the entire system. Studio-level work with Pr is done on purpose-built workstations with an internal or bus-linked RAID unit (minimum 6X for HDD, 4X for SSD). Relying on a single i7 also introduces a bottleneck; pretty much everyone in HW runs Xeons, which, even though they're older technology, were designed specifically for pushing mountains of data around. A lot of people go with HP's Z820 series if they don't want to build something in-house, and there are places such as LightIron who'll rent DI/PP services for a feature. It's not cheap - a decent 6K-capable workstation is around $5000 plus storage - but nobody made a feature film with pocket lint.

See http://adobe.ly/1t6gp5O for an in-depth explanation of what matters (and what doesn't) when choosing a workstation for Pr+Sg.

25
General Chat / Re: Let's request Camera RAW as "effect" for Premiere CC
« on: October 15, 2014, 02:22:14 PM »
The thread you link to is somewhat at cross purposes.

  • Camera RAW as a Source Settings filter in Pr is not going to happen.
  • A conventional timeline effect that pulls together the existing sliders for exposure, shadows, etc. from things like the Fast Color Corrector is requestable.
