Messages - DFM

#26
It's true that having more bits gives you more wiggle room, but realistically it's very unusual to push footage far enough to notice the difference between 10b and 14b if it's being delivered online. If you really need every last atom of data (starlight filming, B-cam footage for Avatar 2, etc) then stick to cDNG proxies and pay the price in time and effort. Remember that if you're transcoding in AE there's nothing to stop you from making quick and dirty corrections to WB and exposure in there, so your 10-bit mezz files only need gentle tweaking in Pr/Sg.

I know I've said this before, but there is no plan for Mercury Playback Engine to support the flavors of cDNG currently being created by the various translator apps. If ML files are created that match commercial versions (BMCC, etc.) bit-for-bit, or a major hardware manufacturer starts saving cDNGs that coincidentally match the stuff from ML, then you'll get fluid real-time playback in Premiere CC. Until then you have to jump through some hoops. I shoot MLV all the time but we have to be honest - Adobe has a lot to lose by devoting time and money to supporting an unofficial "hack". If it happens by accident because of support for something else, s'all good.
#27
Unless you're doing very extreme grades, for 90% of projects you'll get visually identical quality by rendering your cDNGs to a visually-lossless mezzanine format (DNxHD 10-bit, Grass Valley/Edius HQ, ProRes HQ), then using those in Pr as online masters. No need to go back to the DNG frames at the end, especially if you're exporting to an 8-bit codec.
#28
Unless you use a third-party plugin, Adobe software does not support .RAW or .MLV files.

You can import all types of DNG frame sequence (stills and CinemaDNG) via the Adobe Camera Raw interface in After Effects CC. There's no need to process the DNGs in any particular way, and no issues with strange-colored highlights, but it's not a real-time editor. The workflow I'd advise for fluid editing in Premiere Pro would be to import to AE, render a visually-lossless mezzanine format such as DNxHD or ProRes HQ, and import that into Pr. You don't need to use proxies as the rendered footage is perfectly adequate, and it's MPE-compatible for real time playback.

In Premiere Pro CC we have native support for a couple of camera-specific flavors of CinemaDNG. "Still image" DNG sequences will not import at all, so you must convert your MLV/RAW files with one of the transcoding apps that makes cDNGs (such as Chmee's raw2cdng app). Because right now Adobe only guarantee ingestion of footage from the Blackmagic range, you will see channel range issues with pretty much anything else (pink highlights, green color casts, etc.) so you need to ensure the transcoder makes cDNGs with the same bit-depth and channel ranges as a BM body, or do a bunch of refactoring in Speedgrade. Developers should concentrate on making perfect 'Blackmagic flavor' cDNGs rather than hoping PrCC will support the stuff they're currently exporting. Of course there's nothing to stop developers creating a synthetic importer that loads MLV or RAW files into Premiere's frame buffer on demand (the SDKs are freely available) but whether there's a commercial justification for one is another matter entirely.

There will be more camera bodies supported by PrCC in the future but if you're holding out hope of seeing "Magic Lantern" on the official list of import formats for any Adobe application, you're going to be disappointed. Adobe has close relationships with camera manufacturers - if MLV or RAW were ISO standards then software developers could add support without annoying their industry partners; but that's not going to happen.
#29
Lens correction is a geometrical transform based entirely on fixed profile data - it is not adaptive.
#30
Quote from: Spooke on August 17, 2014, 05:23:00 PM
Could anyone confirm whether this card will work...

http://www.magiclantern.fm/forum/index.php?topic=12630.0
http://www.magiclantern.fm/forum/index.php?topic=11428.0

Short answer - in terms of speed, if you get a good copy it's OK. In terms of reliability, the 64GB version is much better. Unless a larger card is essential for longer recordings, a bunch of 64s is safer - if one dies you aren't totally up the creek. Eggs, basket, etc.

I'm running 64gb 1066x cards on 7D bodies without any problem, but realistically since the CF bus in the 7D isn't too hot, they benchmark the same as the 1000x.
#31
Quote from: Walter Schulz on August 15, 2014, 10:54:54 PM
Thanks!
Is there a method to compare our LR settings?

In theory it shouldn't matter; nothing in the application or catalog prefs changes the code used to update the IPTC/EXIF block. I'll run some more tests and try to make it break.
#32
Quote from: Walter Schulz on August 15, 2014, 05:40:28 PM
Please verify or falsify this one.

I followed your steps:

  • Import DNG to LR5.6 using 'Add'
  • Add another keyword
  • Photo > Update DNG Preview & Metadata

No reduction in file size on disc. Screenshots at http://i.imgur.com/DgYF07F.gif
#33
Quote from: Dante on August 11, 2014, 05:46:32 AM
Yes, it seems through testing that LR is in fact re-saving/exporting the DNG again when you "Update DNG preview and metadata", with the lossless option.
Although I have found no documentation from Adobe saying that's what's happening, and there is nowhere to change the compression option for the output.

Lightroom is non-destructive. You don't ever "re-save" a file, you export a new copy. Of course you can choose to write that over the original, but it's still a different concept to the way something like Photoshop does it.

Lightroom's default for exporting DNGs is lossless compression. You can choose DNG lossy compression via the Export dialog. The compression ratio is roughly equivalent to the 100% option for JPEGs. You cannot change the compression ratio in Lightroom or DNG Converter.

Adobe DNG Converter can save to all four DNG formats, depending on the command-line parameter you set:

-c = compressed (lossless)
-u = uncompressed
-lossy = lossy compression
-l = linear DNGs (stores debayered pixels, usually RGB channels).

When you use -lossy, you can also specify either:

-side PPP = maximum pixel width or height on the long edge
-count PPP = desired kilopixel count  (1024 = 1 megapixel, etc.)

e.g. -lossy -count 4096 will save a lossy DNG with 4-megapixel resolution at the original aspect ratio.
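
For anyone who hasn't driven it from the command line before, a typical Windows invocation looks something like the line below - the paths and frame name are placeholders for your own install and folders, and on OS X you call the binary inside the DNG Converter .app bundle instead. Wrap it in a for loop or a batch script to run over a whole frame sequence:

"C:\Program Files\Adobe\Adobe DNG Converter.exe" -lossy -count 4096 -d "D:\dng_out" "D:\dng_in\frame_000001.dng"

The -d flag just tells it where to write the converted files; everything else is as described above.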
#34
Faking the timestamps on anything is a bad idea - the #1 rule of any multi-device/MOS shoot is for everything to agree exactly about what time it is. It might sound good now, but in a year when you're trying to track down some old footage it really won't help if it claims to be from a totally different event. The closer you can get to time lock, the easier it is to edit.

Label all your cards so they stay with their intended device, and put a little text file on the card ("CAM1.txt", etc) so when you dump it to your working storage you can tell which folder came from where. Filenames don't matter if they're in logically-named folders (ask anyone who shoots MXF). Besides, if you're talking about a multicam shoot, once they're all in your editor you know instantly which is which from the angle of view. For an A/B shoot you would slate the cameras differently.
#35
The OSIRIS LUTs are film emulations, so if you want your footage to retain that emulation they should go last in the stack. You may want to break the emulation for artistic reasons, but grading afterwards means your "Prismo" footage isn't Prismo-colored anymore.

In terms of color correction inside Ae, you have a bunch of bundled low-level effects (curves, tint, levels, etc.) but the most powerful Ae-specific tool is the included copy of Synthetic Aperture Color Finesse LE (see tutorial at http://tv.adobe.com/watch/short-and-suite/color-finesse-workflow-primary-grading-part-1/ ). You can also get third party add-ons like Colorista from Magic Bullet. Which you prefer is a matter of taste and familiarity for most people - they all do the same fundamental things.

If you have a CC subscription then you also have Speedgrade - which is now Adobe's primary application for digital cinema color correction and grading. It's more powerful than the Ae plugins, but has a significant learning curve.
#36
Don't mess with anything in the Basic panel aside from Exposure. The other sliders are adaptive in PV12 so their effect varies depending on the tonality of the image. Moving 'highlights' or 'blacks' on frames within a cDNG clip will cause flicker as each one will be adjusted by a different amount. Grade after import into your video editing package, using curves.

Quote from: elkanah77 on July 31, 2014, 09:14:09 PM
Hi,
New member here.
Just wondering about raising the shadows and lowering the highlights in ACR before exporting. I do that when retouching stills, but do the VisionLog profile in ACR embedd the DR so it's best to leave these at 0 before exporting the sequence? It would seem to me that raising the shadows a bit and lowering the highlights bring back details as with stills but also increase noise. Better to take care of shadows/highlights later? Cheers.
#37
Quote from: mucher on July 30, 2014, 07:06:26 PM
I remember that g3gg0 once mentioned that you can import all the dng/cdng sequences into AE, then save AE project, then import the AE project into Premiere. It works for me, thanks, but I am not much a user.

Yes you can (we call that Dynamic Link) but you won't get responsive playback in Pr, as you're still debayering frames as you go.
#38
That's a subjective question - both ACR and Neat do perfectly good noise reduction, and for less-than-extreme cases I don't think anyone would notice the difference. Which you prefer for really pushing things is a matter of taste.

The most important thing when really hammering out high noise levels is to put some grain back afterwards, otherwise it looks fake. Don't use ACR's "grain" controls though, as they apply the same pattern to every frame. You need to use an effect in AE or Pr to do it, either synthetic gaussian noise or a stock clip of grain.
#39
In terms of cost-vs-speed, Komputerbay 64gb cards remain the best value - the 1000x or 1066x are both fast enough for the 7D (remember it's an old body, it can't take full advantage of the latest super-high-speed cards) and since there's barely any difference in price between them, go for the 1066x. They're listed on Amazon but make sure you buy from the Komputerbay seller account so you're getting genuine stock. Steer clear of the larger capacities, they tend to be slower. The problem with Komputerbay cards is quality control - they're cheap precisely because they're not as tightly-selected in factory QC, so you need to run an in-camera speed test as soon as it arrives (using ML), and send it back if it's way below spec. Komputerbay are fine about replacing slow cards but they're based in the USA, so elsewhere you just use Amazon's returns policy. I know people who will buy a few, find the fastest and send the others back  8).

If you want reliability and speed, you have no choice but to pay for it - the Lexar 1066x cards for example are 4x the price, but it's very unlikely you'd get a dud.

Quote from: sulky on July 30, 2014, 03:50:26 AM
Hello! Can anybody please tell me which is the best on budget CF card to record RAW on 7d?
#40
Yes, that's correct. By compressing to Quicktime with one of those codecs the file is much easier to play back and scrub through. Because it compresses to 10-bit you'll find it helps to use the log conversion profile - for basic grading you probably wouldn't notice, but extreme changes in Premiere would pull out some banding in the whites and blacks if you kept everything in linear space.
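
Rough numbers to illustrate why: the sensor data starts out as 14-bit linear (16,384 code values), and a linear encode spends roughly half of those codes on the brightest stop, leaving the deep shadows with only a handful. Squash that straight into a 10-bit file (1,024 values) and a hard push in Premiere will reveal the steps; a log curve shares the codes out more evenly across the stops, so the same push stays smooth.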

Quote from: Dani on July 29, 2014, 06:27:01 PM
So if i import my dng files into after effects, then change setting to 16 bit and then export the MOV i will have a file similiar to the dng footage, and in premiere i will also do some color correction having the same flexibility as the dng footage?
#41
12-bit files would work in theory, if they were created properly - but the current batch of tools leaves them with broken highlights. Only the 16-bit versions from Chmee's app will import correctly into Premiere Pro CC. Scrubbing a 12-bit cDNG sequence isn't significantly less work for the CPU than a 16-bit one, and the bottleneck is usually disk read speed anyway.

CinemaDNG support for ML files was not an intended feature in Premiere Pro (or in any of CC) - it's an accidental by-product of Adobe's support for Blackmagic footage, so files from those cameras are the only types with any guarantee of correct debayering.


Right now, if you want fast scrubbing in Pr and you have the full CC subscription, my suggestion is to encode log mezzanines using After Effects (which will read CinemaDNG or regular "still" DNG sequences just the same). These aren't "proxies" as you will use them for output, but they play in real-time even on a laptop.


  • Create the DNGs through one of the tools listed on these forums
  • Import to After Effects by selecting the first frame and checking the "import as footage" option
  • Change the project to 16-bpc (Alt/Opt-click the bit depth indicator at the bottom of the Project panel)
  • Use the 'interpret footage > advanced' option to open the DNG sequence in Camera Raw
  • You can apply noise reduction, lens corrections and change exposure, but do NOT touch the other sliders (blacks, whites, contrast, etc.).
  • Recommended: Apply a LOG camera profile (CineLog or VisionLog) to make the footage "flat"
  • Export the comp to a Quicktime-wrapped mezzanine codec such as Cineform, DNxHD 10-bit or Grass Valley - which you use depends on your operating system. Don't export to AVI as they are always 8-bit.
  • Import the MOV files into Premiere Pro - visually they'll be 95% the same as the cDNG footage, so they're fine for most applications. If you really need pixel-perfect output from the cDNG frames you can offline and relink the clips before export (but in that case it'd be simpler to use Premiere Pro's internal proxy system directly on the cDNG footage).
  • If you applied a log profile in ACR, to get the linear footage back just apply a Lumetri effect and use the Log>Rec709 LUT provided by the people who made the camera profile; or grade directly in Speedgrade (technically Sg expects a different curve to CineLog/VisionLog, but for most people the visual discrepancy is too small to care about)
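
One extra note on the export step above: if you end up with a pile of comps to turn into mezzanines, After Effects' command-line renderer (aerender, in the AE install folder) can batch them overnight. A rough sketch only - the project path, comp name and output module template are placeholders for whatever you've set up in your own install, and the template needs to be one you've saved with your chosen mezzanine codec:

aerender -project "D:\shoot\mlv_mezz.aep" -comp "M30-1234" -OMtemplate "DNxHD 10bit MOV" -output "D:\mezz\M30-1234.mov"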
#42
Quote from: herodotus on July 26, 2014, 04:33:49 PM
Ah, thank you. DFM, do you mean the scaling that takes place in FCP?

Indeed. You could upscale the original files, or just scale the existing clip inside FCP - my point is you'll find it very hard to notice any quality difference in this case, but doing it in FCP via the Transform tool is a 5-second job.
#43
If you had the folder of DNG files you could upscale them in Lightroom's Export dialog and re-render the ProRes; but I don't expect you'd be able to see any difference between that and simply scaling the existing 1728-pixel ProRes footage (other than the hours passing by as you wait for the thing to render).

With big upscales (>200%) you start to care about the interpolation algorithm, but here you're only going up by 19%.
#44
ML loads each time you turn on the camera - the code runs and the extra functions are all set up in memory. Pressing the DELETE button simply pops open the menu; it's ML's equivalent of pressing the MENU button (which still opens Canon's menu as it always did). To put it bluntly, the DELETE button was the only one that isn't used in record mode and is present on every camera, so it was the logical choice when ML needed its own menu button.

Quote from: C7D203 on July 12, 2014, 05:35:17 PM
If I need to press the trash to load it, why do I need to press set to NOT load it? Can't I just not press anything not to load it?

As to your other questions:

Quote
I need to shoot myself for youtube videos but part of what I'll use the 7D is "an extra camera" to my brother's 5D Mii when we do serious documentary work. Will probably use the 7D for close-ups from a different angle, etc.
If the main camera is shooting raw video, it would make sense to do the same just so the footage can be edited through the same workflow - but it's entirely down to what your brother's doing. Irrespective of what type of video is being shot, ML does offer a bunch of useful features - such as zebras and focus peaking - which are present on true video cameras but not on the 7D. They make it a lot easier to get your exposure and focus points right but don't affect the recording at all. Before ML came along, pro operators using a DSLR had to plug in an expensive field monitor that had those tools, in effect all ML has done is move them onto the LCD panel on your camera.

Quote
I was wondering if ML allows for better ISO performances because as much as I love my 7D and as much as it is a huge upgrade to my previous 20D (!!) I'm not impressed by it's high ISO shooting at all. Anything above 800 is meh at best.

ML does nothing to reduce high-ISO noise or make the sensor more sensitive - it cannot make your 7D perform like an A7S; that's a hardware limitation. What it can do is assist in getting the perfect exposure (via ETTR) so the noise is minimized, and help you to see what you're shooting by changing the LCD view. Fundamentally you're still using a 7D, which was designed for sports. If you're shooting black cats in a coal mine, you need another camera.
#45
Quote from: C7D203 on July 08, 2014, 09:53:42 AM
... it does appear that with (or maybe not even) an anti-aliasing filter and with ML you get high quality HD video with further control.

Yes, sort of!

With ML installed the 7D can shoot H.264 and raw video in two modes - regular and crop. Regular video uses the full area of the sensor but only samples some of the pixels (called line-skipping) - this is what introduces the nasty moire/aliasing. In crop mode, the camera only uses the center portion of the sensor and samples every pixel, so the skipping problem goes away (in theory it's as good as a still photo). You cannot shoot full-HD (1920x1080) RAW in non-crop mode, it's a limit of the way the sensor is read, so if you're after 1080p footage you either have to shoot in crop mode (which creates a very narrow field of view, similar to using the "5x" zoom button in Live View), or stick with H.264. Until very recently we had no fps control for crop mode raw recording, so the data rates being sent to the card were so high it was impossible to shoot full-resolution for more than a couple of seconds. Even now, you need the fastest possible cards (1000x or above) to record at the larger sizes. You're OK with a 64G card on the 7D, but a 533x speed rating will limit what you can shoot in raw video. These days the 'workhorse' card is Komputerbay's 64GB 1000x.
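
To put some rough numbers on it: a 1920x1080 frame of 14-bit raw data is about 1920 x 1080 x 14 / 8 ≈ 3.6 MB, so 24fps needs roughly 87 MB/s of sustained writing. Nominal CF ratings work out at 150 KB/s per 'x', so a 533x card is rated around 80 MB/s (and sustains less in practice), which is why it chokes on the bigger raw frame sizes; a 1000x card (~150 MB/s nominal) at least takes the card itself out of the equation, leaving the 7D's own CF bus as the limit.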

Your mention of 'further control' is a little different for raw video - yes, you do get to change your white balance and push the exposure much more in post-production, and as it's basically a series of still photos the quality is far higher, but the price you pay is needing to do that post-production; which involves extracting the footage, color grading in Resolve or Speedgrade, then rendering out a new file. Unprocessed, the raw file is flat and often has messed-up black and white values, plus it'll be a long while before you can upload an MLV file to YouTube! If you've never done grading before, that's a big leap from "copy from card and press upload". Raw video is rightly something that we're all passionate about, but some days you're just hoofing a clip of your cat onto Facebook, and plain old H.264 recording is the way to go. ML doesn't stop you from doing that.

ML's very stable in photo mode these days - occasionally you can crash the firmware by pushing some of the obscure features too far (like focus stacking with motor speeds set too high), but you just pull the battery and restart. If you have a fast card the raw video recording is also pretty stable - nobody's blown up their 7D yet - but you're at the limit of what's possible, so even things like attaching an HDMI field monitor can mess up the recordings. Play with it and learn what works with your methods - don't install it the day before a commercial job! Personally I think raw video is good enough for B-cam work and personal projects; I don't know of anyone who'd rely on it to A-cam a one-off event like a wedding, but then the same was true in the early days of RED.

Adding a VAF filter from Mosaic will reduce the aliasing in regular mode only - it has no benefit in crop mode; if anything it'll make the image a bit worse. It's also designed for dedicated video shooting: you remove it to take stills, as the filter wedges the mirror up and makes your viewfinder useless (it's possible to shoot stills via Live View, but that's a real pain and there will be a slight blurring of the image). The filter works for both raw and H.264 footage - indeed, with the huge increase in fidelity you get from raw files, having a filter is arguably even more important. I've been using one since they launched, but if you're doing mixed-media location work you need a second body for stills. Anything that ships with a set of tweezers isn't going to be installed in a field!

ML has made a huge difference to the 7D's video performance, and while it doesn't alter still image files the focus/exposure/automation tools are a massive help to folks shooting macro, timelapse, etc. The thing to remember is you don't have to use it even if it's installed - ML doesn't remove any of the 7D's factory features, it simply adds more.
#46
@reddeercity:

1) There's no simple route within Premiere Pro CC 2014 to use the ACR grading engine - it's been entirely replaced by Speedgrade and Dynamic Link. Note the "source settings" panel displayed by PrCC2014 when you bring in CinemaDNG footage isn't intended for grading, it's just a way to match the hardware so it looks vaguely "normal" on the source monitor. All the heavy lifting to color-correct and grade the footage happens afterwards, ideally within Sg (or by creating a Lumetri LUT in Sg and applying it in Pr). If you want to use ACR to adjust your cDNGs, you have to use After Effects (in response to steffenhaldrup: you can use Dynamic Link or render to a mezzanine format, it's up to you - Dynamic Link won't give you MPE-enabled playback in Premiere as you're still pulling single frames, but you benefit from real-time updates when you tweak something in AE).

2) You can absolutely use a second display for monitoring - either a second computer monitor or a hardware output device feeding a reference TV. Just open Premiere's Preferences > Playback and tick the box you want under "Video Device". You even have offset fields to account for any loop delay between the audio and video hardware. Be careful when using computer displays fed from a consumer graphics card as not all of them allow individual calibration of each display, so people tend to require physically different cards or a multi-head pro series card. For example the video-out port on a laptop is rarely able to hold a separate calibration from the laptop's own panel.

The change to a Pr<>Sg pipeline takes a bit of mental adjustment; I agree that for our specific workflow it doesn't help much, but ML cDNGs are very much an edge case. The pipelines for CC 2014 are designed with digital cinema projects in mind, where Sg is an established tool - usually in the hands of a dedicated colorist - and the idea of adjusting 'video' footage in Camera Raw is very alien. The new workflow is optimized for true "cinema" cameras (Blackmagic, etc.), so while the Sg team acknowledged support for ML-5DIII in their blog it's not an advertised feature, and it's entirely a by-product of support for other hardware. Some flavors work, some don't, but to put it bluntly: if you can make your ML files exactly the same internal format as BMCC's footage, it will work perfectly. So far you're close, but not close enough.

Yes, the old ACR > Pr workflow was more flexible, as you weren't limited in what you could feed the frame server - but it caused no end of hassle for pros as it put a bunch of important grading decisions at the front of the pipe. That's not how studios work.
#47
martin_a is correct, in Premiere Pro CC 2014 the intended workflow for all 'raw' footage is to import CinemaDNG then grade in Speedgrade - not in ACR. This is the only way to deliver realtime hardware-accelerated playback - the rendering engines in Sg and Pr are Mercury-enabled whereas the ACR "develop" module route is an old-school frame server. Because you can now pass a project back and forth between Sg and Pr it doesn't matter which one you start in.

If you really want to stay with ACR then pipe your footage through an After Effects comp - but of course you'll lose MPE playback.
#48
General Chat / Re: Premiere 7.1 Pink Issue
April 28, 2014, 10:56:34 PM
Quote from: chmee on April 23, 2014, 02:17:24 PM
the big update from adobe should do the rest :)

Not necessarily. cDNG support is only designed and tested for commercial implementations (BMCC, etc.), so whether the next release version will handle ML files from specific cameras/converters is just a matter of luck. If a converter creates DNGs that report identically to BMCC, you'll be fine. If not, you may still see the pink highlights.
#49
Thanks for the crosslink chmee.

To confirm - in the forthcoming builds of Premiere Pro CC / Media Encoder CC / Speedgrade CC the Mercury Playback Engine will support some (but not all) flavors of CinemaDNG footage created by ML tools, such as chmee's. ML cDNG import is a side effect of support for other cameras and Adobe aren't making a headline of it for obvious reasons, but it'll eliminate the need to bounce through AE or use the GingerHDR plugins. Instead of importing through Camera Raw, you will get a Lumetri quick-adjust dialog similar to the one for ArriRAW. Playback will be fully accelerated, I can scrub through 1080p footage on my relatively feeble laptop and play back in real time with a bunch of grades and effects slapped on. Disk read speed will be your only practical limit to playback smoothness.

Yes it'd be nice to import RAW or MLV files natively, but we have to be realistic - Adobe has close ties to the major camera manufacturers, and can't be seen to be actively targeting a 'hacked' format however good it is.
#50
With the forthcoming versions, 'party mode' isn't required anymore. The screenshot I posted earlier isn't of a 'party mode' file.