Messages - DFM

#51
Sg in the current version of CC expects slightly different things from footage. In the other apps, DNGs previously went through Camera Raw, but in the forthcoming version they import natively and you don't get the ACR dialog if you open the cDNG footage's "source settings" - instead there's a Lumetri quick-adjust tool. That will impact workflows that use the ACR engine to do the heavy lifting (lens corrections, noise reduction, etc.).

Here's the dialog being fed a 16-bit linear cDNG from chmee's app:

#52
... by the way, if folks want me to run a test on a particular 'flavor' of cDNGs, upload them someplace and ping me a link. Don't need more than a second's worth of frames.
#53
Quote from: chmee on April 03, 2014, 10:48:08 AM
by the way. we'll see what advantages the next CC-Version will bring (NAB2014) - maybe the whole work is not needed anymore :)
http://www.cinema5d.com/news/?p=24528

The forthcoming release of Premiere Pro CC and Media Encoder CC will support *some* types of ML-generated CinemaDNG footage, but not all, and they have to be real cDNG frames, not just a folder of 'stills'. I can't give a detailed matrix yet as the code is still in flux; for example, the build I'm testing now has problems with footage from the 7D and won't read 14-bit files.
#54
Tragic Lantern / Re: 7D Raw Thread
December 02, 2013, 01:49:00 PM
Quote from: hdclip on November 30, 2013, 04:54:16 PM
Hello!!! I would like to know, which X-Rite do you use to create a color profile? Thanks!!!

X-Rite's ColorChecker products come with their own profiling software. Normally people run it as a Lr plugin, but it also works in standalone mode: feed it any untouched DNG that shows the card and it spits out a color profile, which you can then apply to the ML DNG sequence through the Camera Calibration panel in ACR.
#55
Tragic Lantern / Re: 7D Raw Thread
November 30, 2013, 11:23:28 AM
Quote from: jman on November 29, 2013, 11:45:58 PM
I have had no problems using an H4N and syncing to the camera beep in post. ML beep is fine to sync to.

Provided it's audible, i.e. the camera is in range, and you're only using one camera.

Slating each scene is much easier and can lock any number of cameras to any number of audio tracks. It has a bunch of other advantages that matter for RAW shooting (shot info on screen, whibal targets, etc.). Stick a ColorChecker card on the slate and you have everything you need to match footage from different cameras, as you can dump that DNG frame into X-Rite's software and create a color profile that normalizes each clip.
#56
Tragic Lantern / Re: 7D Raw Thread
November 27, 2013, 11:18:05 AM
Sharpening and noise reduction in ACR apply per-image, so it depends on how aggressive you are. Toning down luma noise from high-ISO footage is generally OK, but if you smooth or sharpen the image too far the affected areas can be different enough between frames (at the pixel level) to introduce flicker. Some of the native video tools for sharpening and NR will peek at adjacent frames and make sure the pixel changes are gradual.

It's similar to the exposure/clarity question - in theory, when you're shooting the real world, two adjacent frames should have extremely similar contents (just with the pixels shifted a bit). Unless there's a speedlite going off you shouldn't see flickering in the footage even though the exposure slider is frame-based, as "90% of white" in frame 999 should look the same as in frame 1000. Push the sliders too far, though, and you will.

I wouldn't have any problem taking a high-ISO ML DNG sequence back to a "normal" noise/sharpness level with the ACR sliders.
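To picture why per-frame ('image adaptive') adjustments flicker while temporally-aware tools don't, here's a minimal standalone sketch (not Adobe code; the exposure values are invented) that smooths hypothetical per-frame settings with a rolling average so neighbouring frames can't jump apart:

```python
# Illustration only: smooth per-frame adjustment values so adjacent frames
# can't differ enough to read as flicker. The numbers are made up; a real
# "image adaptive" tool would derive them from each frame's content.

def smooth(values, radius=2):
    """Rolling average over +/- radius neighbouring frames."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

per_frame_exposure = [0.62, 0.64, 0.71, 0.60, 0.63, 0.65]  # per-frame guesses
print(smooth(per_frame_exposure))  # neighbouring values now change gradually
```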
#57
Tragic Lantern / Re: 7D Raw Thread
November 27, 2013, 05:53:28 AM
Quote from: dmilligan on November 26, 2013, 02:12:56 PM
I think the reason that the highlight slider is "so good" is also what is causing the flicker, at least to some extent. It's causing the flicker because it's 'image adaptive'. It analyzes the image and determines the best curve on the fly based on image content. This is what makes it work so well, it's also what makes it flicker, each frame is slightly different.

There are, however, other controls that Adobe has made 'image adaptive' that don't necessarily need to be, and it would be nice if Adobe would make them 'temporally aware' or something like that. It's my understanding that fewer of the 'Process 2010' controls are image adaptive, so a lot of folks use 2010 mode in ACR for RAW video.

All the sliders in the Basic panel in PV2012 are image-adaptive, and things like Clarity always have to be; 'Exposure' and 'Brightness' in PV2010 were just offsets (the latter with a rolloff). Yes, this does cause a problem when the input source is a sequence, and the need to tweak ACR to be more 'video-friendly' is one reason why it didn't make it into the 7.1 release. I can't comment on future updates though. Realistically, with a DNG > ACR > After Effects workflow you only use ACR for the factory corrections (white balance/tint and lens distortion, maybe a touch of noise filtering but not too much), and all the exposure/contrast work is then applied with curves effects in AE. If you're working in a 32-bit comp there's no difference in final quality.
#58
Tragic Lantern / Re: 7D Raw Thread
November 21, 2013, 09:45:11 AM
Sorry, but due to time pressures only Blackmagic CinemaDNG files are supported in the 7.1 release. Other flavors will follow soon.

Quote from: Fringuello on November 19, 2013, 06:26:27 PM
I have problems with Premiere Pro 7.1 CC; it should open the DNG sequence, but it seems that the DNGs created by Magic Lantern alter the metadata? Does anyone have the same problem?
http://www.magiclantern.fm/forum/index.php?topic=9345.msg89144#new
#59
Tragic Lantern / Re: 7D Raw Thread
November 04, 2013, 02:03:26 PM
No blurring in the actual recording. I'm using RAW instead of MLV for the same reason.

Quote from: arrinkiiii on November 04, 2013, 12:34:07 PM
-Went to record some footage and noticed this: since MLV gives me pink/corrupt frames I now use raw_mo... What I notice is when you hit the record button the LV makes the image blurred... haven't yet seen whether this gets recorded to the file or is just in the LV.
#60
Tragic Lantern / Re: 7D Raw Thread
November 04, 2013, 02:00:53 PM
You would do well to address that remark to the person who started the conversation. My job is to direct customers to the correct place for Adobe product support, which isn't on this site.

Quote from: RenatoPhoto on November 03, 2013, 06:19:56 PM
Please stay on topic, this is the 7D topic. Post-processing posts do not belong here!

http://www.magiclantern.fm/forum/index.php?board=54.0
#61
Tragic Lantern / Re: 7D Raw Thread
November 02, 2013, 10:10:04 PM
Quote from: mrnv45 on November 02, 2013, 01:50:06 AM
I'm having issues with ACR pulling up in After Effects...

Camera Raw doesn't have a separate installer, so it doesn't have a separate activation system; it requires that a parent product be activated first (e.g. for Camera Raw CC it would be Photoshop CC or After Effects CC). I'm not sure of your setup as you previously referred to CC and CS6, and if there's a mixture of versions on the same machine it can confuse the licensing service. Please post your question to the Adobe product forums and we'll help resolve the problem over there.
#62
Tragic Lantern / Re: 7D Raw Thread
November 02, 2013, 10:04:36 PM
Quote from: danistuta on November 02, 2013, 08:03:31 PM
The new Premiere 7.1 update says it supports DNG files. I tried putting in DNG files converted with mlv2dng, but it doesn't seem to work. What's the right way to do it?

The DNGs aren't structured correctly so they won't import.
#63
Tragic Lantern / Re: 7D Raw Thread
November 01, 2013, 11:40:04 AM
First question - yes and no. Regular '1x' video records across the entire sensor width, so to get down to video resolutions it skips lines and pixels, as if you were putting a mesh grid in front of the sensor. Shooting in '5x' crop mode you are sampling without line-skipping, as if you were shooting a regular photo. The recorded area in crop mode is a small region from the center of the sensor, hence the magnification. There's no loss of image quality; indeed, by avoiding line-skipping you cure the alias/moire problem. The drawback is the need for UWA lenses to give a manageable field of view.

Second question - correct; it's just altering the region being recorded, with no pixel aspect ratio adjustments going on. Ideally, yes, you would grab the maximum resolution and crop in post, but for most people that's not going to be practicable. Even if card speeds are up to the job, you start needing another backpack to carry the cards in.
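As a rough illustration of why crop mode pushes you toward ultra-wide lenses, here's a back-of-the-envelope calculation. The 5184-pixel figure is the 7D's still-photo sensor width; the 1920-pixel recording width is just an example, not a statement of what ML records:

```python
# Rough field-of-view math for 5x crop recording (illustrative numbers).
sensor_width_px = 5184   # 7D still-image width
record_width_px = 1920   # example raw video recording width in crop mode
aps_c_crop = 1.6         # 7D sensor size vs full frame

extra_crop = sensor_width_px / record_width_px   # ~2.7x tighter than 1x video
total_crop = aps_c_crop * extra_crop             # ~4.3x vs a full-frame camera
print(f"extra crop in 5x mode: {extra_crop:.1f}x, total vs full frame: {total_crop:.1f}x")
print(f"a 10mm lens frames roughly like a {10 * total_crop:.0f}mm lens on full frame")
```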


Quote from: dmk on November 01, 2013, 10:31:59 AM
So is this correct- if remaining at the same resolution and zooming in, it's a digital zoom (i.e. lower quality), but if zooming in and increasing resolution, it's a crop (i.e. same or higher quality)? If that's right, is there a simple equation to figure out where that balances out to avoid digital-zoom artifacts?

Also, is this correct- aspect ratio is simply an in-camera crop of the image data, mostly useful to allow ML to keep the desired framerate. i.e. it's not doing any anamorphic squeezing or anything like that, ideal would be to keep aspect ratio at 1:1 and record at full sensor resolution and crop in post if desired for aesthetic reasons (obviously that's not achievable right now, just trying to understand what Aspect ratio here means exactly)?
#64
Tragic Lantern / Re: 7D Raw Thread
November 01, 2013, 11:29:07 AM
Quote from: arrinkiiii on November 01, 2013, 10:20:10 AM
-I'm still having pink/corrupted frames with the latest MLV; even the LV in MLV has some delay and glitches... with raw legacy everything goes well.

Same here. I haven't seen a corrupted frame in RAW for weeks, but everything I've shot in MLV is littered with them (it's not a buffer/card issue: I get bad frames at 1280x720 from MLVs, but the same card will write continuous RAW files at full non-crop resolution and they're perfect).

It's annoying because the 'canikon' headers from RAW2DNG aren't correct, so the frame sequence doesn't comply with the rather strict requirements for CinemaDNG.
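If you want to see what a converter is actually writing into the frames, a quick way is to dump a few DNG-level tags from one frame of each converter's output and compare them. A hedged sketch (it assumes exiftool is installed and on your PATH; the tag list is just a starting point, not the definitive cDNG checklist):

```python
# Sketch: print a handful of DNG-level tags from one frame so the headers
# written by different converters can be compared side by side.
# Assumes exiftool is installed; the filename below is hypothetical.
import subprocess

def dump_tags(dng_path):
    tags = ["-DNGVersion", "-DNGBackwardVersion", "-UniqueCameraModel",
            "-Make", "-Model", "-BitsPerSample", "-CFAPattern"]
    result = subprocess.run(["exiftool", "-s", *tags, dng_path],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(dump_tags("frame_000001.dng"))
```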
#65
Tragic Lantern / Re: 7D Raw Thread
October 31, 2013, 11:02:01 AM
Sorry to stray off topic, but I would really advise against that workflow.

Lightroom was never designed to process CinemaDNG sequences. The database engine hasn't been optimized for pushing tens of thousands of frames through a temporary transcoding pipeline: it assumes you're permanently importing photographs into the catalog (so all that waiting around is while it writes hundreds of megabytes of SQL and preview data to disk that you'll never need again), and while the develop settings in Lr are nondestructive, the exports aren't. The point of using raw footage is to retain it as long into the pipeline as possible, so that, for example, you can change the white balance long after you've started cutting the dailies.

After Effects includes exactly the same Camera Raw engine as Lr, but in AE it's been integrated specifically to support frame sequences. No temporary files are created, your develop settings for frame 1 are automatically applied to all the others, and it's completely nondestructive. ACR supports user presets, lens corrections, etc., and the only things missing in the ACR window compared to the Lr UI are tools which would never be used on frame sequences (such as healing brushes).

Bouncing through a QuickTime DI is also a very bad idea, as the QT engine isn't optimized for 64-bit platforms (unlike AE and ACR). The entire point of the Dynamic Link pipeline between AE and Pr is to avoid any need for intermediates.

The 'correct' workflow at the moment, from an Adobe point of view, is to import the DNG sequences directly into a 16- or 32-bit comp in After Effects, apply the import corrections (lens adjustments, whibal and detail) in ACR, then grade the footage on the timeline. If you want to cut in Premiere Pro, simply drop the AE comps into a Pr sequence. Everything remains 100% nondestructive and x64-compatible.

All this will be moot when the updates to the CC video tools are rolled out to the public, as you'll then be able to load cDNG sequences into Premiere Pro and SpeedGrade. They've been delayed while engineering performs some additional tests, but they're due shortly.

~D~


Quote from: mrnv45 on October 31, 2013, 06:43:32 AM
nice workflow

Lightroom
Quicktime Pro (if you know how to get it  ;) )
After Effects

really simple and cool...
#66
Quote from: painya on October 05, 2013, 10:58:43 PM
Hmmm, I wonder if the update is limited to just CinemaDNGs, or would include DNGs.

CinemaDNG is a definition of how to wrap a series of still frames into a container that can be read as a video clip - it's not a codec. Each frame can be a DNG file or a TIFF (though the latter is vanishingly rare). The 'wrapper' can either be an MXF structure or just a plain old folder on disc with sequentially-named files in it. The DNGs used for each frame are no different from the ones you'd get from still photography, and Premiere's Mercury Playback Engine doesn't care whether you're opening one image file as a still or a sequence of them as a clip.
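For the 'plain old folder' case, the practical requirement is simply that the frames are numbered sequentially with consistent zero-padding so an importer can treat the folder as one clip. A minimal sketch, to be run on a copy of your frames (the folder name and naming pattern are just examples):

```python
# Sketch: rename a folder of DNG frames into a consistent, zero-padded
# sequence (shot_000000.dng, shot_000001.dng, ...). Work on a copy of
# your footage; the order is taken from the sorted filenames.
from pathlib import Path

def make_sequence(folder, prefix="shot_", digits=6):
    frames = sorted(Path(folder).glob("*.dng"))
    for i, frame in enumerate(frames):
        frame.rename(frame.with_name(f"{prefix}{i:0{digits}d}.dng"))

make_sequence("clip_001")  # hypothetical folder of frames
```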
#67
When I'm shooting in the back of beyond I have a similar setup to Andreas but I'm battery-powered:

Startech USB3 hub
Lexar USB3 CF card reader
Toshiba stor-e USB3 1.5TB drive
All the above run from a PowerGen external battery pack and live in a Peli 1120 case with room for a bunch of spare LP-E6s.

If I'm doing backups (straight copies) an ICS cellphone will suffice as the controller, but before I'd risk wiping a card I want bit-level-verified duplication. For that I use the same kit but with a Surface Pro tablet, with TeraCopy and Adobe CC installed so I can process and view footage on location (I'm not about to edit dailies on top of a mountain, but I can pull DNGs into Lr or unwrap MLVs and check for pink frames etc.). The Surface Pro is hideously uncool, but Adobe haven't gotten round to releasing After Effects for iOS yet... ;)

With the Surface I can get 70+MB/s transfer to the HDD off a KB1000 card. Cellphone only has USB2 so it's teeth-gratingly slow in comparison, but a lot cheaper and easier to cart about  :D
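For a sense of what those transfer rates mean in the field, here's a quick back-of-the-envelope calculation (the card size and the USB2 figure are illustrative, not measurements):

```python
# Rough backup-time estimate at different transfer rates (illustrative).
card_gb = 64
for label, mb_per_s in [("USB3 + Surface Pro", 70), ("USB2 cellphone", 25)]:
    minutes = card_gb * 1024 / mb_per_s / 60
    print(f"{label}: ~{minutes:.0f} min to copy a {card_gb}GB card")
```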
#68
Some general responses to this thread  ;):

We get a lot of comments about the UI for SpeedGrade being strange, and I agree that it's very different from anything else in the Creative Suite family. It's a legacy of two things. Firstly, Adobe have had their hands on the code for only a short time, so the initial development was all about compatibility - Sg as it looks today is basically the same UI that IRIDAS shipped. Secondly, although things are evolving going forward, the DI/CT professionals who use it every day have had many years to learn a bunch of quirky UIs from the vendor of their choice, so a wholesale change to something that behaves like Premiere Pro or Resolve would annoy far more people than it would please.

Without wishing to be rude to anyone, Sg was designed for a very specific user - a color timer in a motion picture studio - and so the UI is secondary. Colorists use hardware desks, so they don't care if the wheels are fiddly to use with a mouse, and neither did IRIDAS. The concept of 'look' layers is also difficult to grasp without some heavy reading of the manuals, but given that the target audience do nothing else all day every day, the industry somewhat prefers things to be obscure (a colorist keeps his or her job until the DP finds a way to do it themselves!). This is also the reason that Sg, as of today, will only import digital cinema footage rather than things like H.264 and AVCHD. That will change in October, but the UI is largely static for the time being.

The solution, as you'll know by now, is to steal the Lumetri Color Engine from inside Sg and plug it into the other Adobe applications. You can already do that in Premiere Pro CC (applying a "look" file as an effect), and come October you'll be able to apply looks directly from Adobe Media Encoder CC, but there's no escaping the need to jump into Sg at some point if you want to create your own looks. Also in October, the link between Pr and Sg will finally connect properly.

Will we reach a point where Sg is as intuitive to use as the consumer products? No, but the biggest quirks will be ironed out. Creative Cloud has changed how Adobe see the application landscape, with truckloads of people getting access to programs that, in all fairness, are beyond their abilities. Dumbing down these top-end applications isn't an option the professional user community would accept, so there will always be a cliff-face learning curve between something like Photoshop Elements and SpeedGrade. The hope is to create workflows that the majority of 'prosumer' users can follow, grabbing snippets of pre-made functionality without necessarily understanding what's happening underneath.

At the basic level, someone with the classic "make my iPhone video look good" question can pick one of the predefined Look files and apply it without needing to know anything about what's being adjusted. Step one level up from that and you can jump across into Sg and fiddle with those defaults, maybe to widen a split tone or burn down the highlights. You will need to read the help file, but not much of it. A colorist who has to shot-match against Macbeth cards and calibrate Alexa log footage for broadcast will lock herself in a basement for 6 months and learn SpeedGrade, then get paid handsomely for her efforts.

In terms of color depth, the sequences in Premiere Pro are always 32-bit floating point. The default MPE previews aren't (because nobody has a 32-bit monitor), but you can bring in your DNG footage, apply any combination of "/32-ready" effects, and export back out to a lossless format of your choice; a pixel in the input stream will be a pixel in the output stream.

In contrast, you have to explicitly set the bit depth in AE (because it's far more CPU-intensive to do the comps in /32). The rule in AE for maximum quality is simple - pick a comp depth equal to or higher than the deepest source file you intend to feed it. If the source is a 14-bit ML DNG, there is no significant benefit in going above a /16 comp - /32 would have a very small effect if you apply some ultra-extreme grades, as the interpolation steps would be finer, but without the source data in the first place you're not gaining any 'real' pixels. If you're exporting that comp to H.264 for the Web you may as well stick to /8 and tone-map on the way in.
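A quick numerical way to see the 'comp depth at or above source depth' rule (a standalone sketch, nothing to do with Adobe's actual processing): run a 14-bit ramp through an extreme grade at different working precisions and count how many distinct output values survive. The /16 and float counts come out essentially the same, while /8 throws most of the tonal information away before the grade even starts.

```python
# Sketch: a 14-bit source graded in /8, /16 and float working spaces.
# Fewer distinct output values means a higher risk of visible banding.
import numpy as np

source = np.linspace(0.0, 1.0, 2**14)   # idealized 14-bit ramp

def grade(x):
    """Extreme contrast push: stretch the 0.4-0.6 range to full scale."""
    return np.clip((x - 0.4) * 5.0, 0.0, 1.0)

for name, levels in [("/8", 2**8), ("/16", 2**16), ("float", None)]:
    x = source if levels is None else np.round(source * (levels - 1)) / (levels - 1)
    print(f"{name:>6}: {len(np.unique(grade(x)))} distinct graded values")
```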


For ML raw video shooters the workflow is absolutely going to improve in October; we're not at the point of supporting MLV as a native file format  ;) but the time it takes to get from a folder of DNGs to a Vimeo-ready file will drop hugely. If you're just transcoding a rush to show someone, you'll be able to do it in AME with full hardware-accelerated rendering. The Mercury Playback Engine in Premiere Pro means that in theory you'll be able to scrub about your CinemaDNG timeline as smoothly as you want, but with all raw footage the bottleneck lands very firmly on your disk. Even with a decent GPU and more than 12GB of RAM, unless you're serving the footage from an SSD or a multi-striped RAID array the system will often struggle to read the frames fast enough.
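To see why the disk is the choke point, here's a rough data-rate estimate for an uncompressed 14-bit 1080p DNG stream (headers and packing vary between converters, so treat the numbers as ballpark):

```python
# Rough data-rate estimate for uncompressed 14-bit 1080p raw frames.
width, height, bits, fps = 1920, 1080, 14, 24
bytes_per_frame = width * height * bits / 8
mb_per_s = bytes_per_frame * fps / 2**20
print(f"~{bytes_per_frame / 2**20:.1f} MB per frame, ~{mb_per_s:.0f} MB/s at {fps} fps")
# Roughly 83 MB/s of sustained reads across thousands of small files,
# which a single spinning disk will struggle to serve while you scrub.
```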

That doesn't mean you need a behemoth of a machine to work effectively, just that you can't expect miracles from a Walmart desktop. I have Premiere Pro CC running on a Microsoft Surface Pro and it limps along OK - nothing I'd want to rely on, but as a proof of concept it's not bad to be transcoding on a 'tablet'. Some of the video pros who post their benchmark figures to Adobe have built things that make my eyes water (I've seen a 20-way RAID cabinet feeding a quad-Xeon board with 4 Tesla cards), but they'll be working on time-critical HD material, such as broadcast news where a minute of extra rendering means they'd miss air time. The 'average' Premiere Pro CC user hovers just a little above the minimum specs; they tend to have a decent graphics card but their disks are...  :o
#69
Post-processing Workflow / Re: Upscaling Technique
September 14, 2013, 05:58:36 PM
It's part of the October update. Full details are available here.

Quote from: zuzukasuma on September 11, 2013, 04:58:32 PM
I just did, CC hasn't got that module yet.
#71
Unlike a regular image sequence, a CinemaDNG sequence is "undeveloped", so its properties must still be defined by the ACR engine and you can adjust it however you want. Remember that the timeline in Premiere Pro is always native 32-bit floating point, so a luma curve effect in Pr would give an identical result to a tone curve in ACR. Can't show you the workflow yet, it's not finished  ;)

Another thing for RAW shooters on non-5D bodies who need to enlarge their frames to HD is the Detail-preserving Upscale effect in AE: http://tv.adobe.com/watch/adobe-at-ibc-2013/after-effects-cc-detailpreserving-upscale/
#72
The updates will push to CC subscribers mid-October. I can't give the exact date as it's still in flux.
#73
Hi - been a while since I posted here, but I guess you may be just slightly interested in what's about to happen to Adobe Media Encoder and Premiere Pro.

CinemaDNG has been a little on the back burner at Adobe for a while now, so it's frustrating pulling the DNG frames from ML raw video into some of the applications (After Effects is hardly the best choice for quick transcodes and fast edits!). With the arrival of SpeedGrade there was a new drive to support digital cinema workflows, and today I can finally announce that in the free updates to CC due to drop mid-October, Premiere Pro and Adobe Media Encoder will be able to read CinemaDNG frame sequences natively. Media Encoder CC will be able to transcode uncompressed CinemaDNG sequences to any format you want with full hardware acceleration, plus the ability to apply Lumetri color grading and BITC to the footage as you go - no longer any need to open After Effects or Premiere Pro just to turn a DNG frame sequence into a video file. Premiere Pro will likewise open CinemaDNG frame sequences directly.

I know it's not exactly support for MLV, but I'm still pushing for that  ;)

There's a mountain of other improvements, full info at http://acrobatninja.com/2013/09/adobe-new-video-features.html
#74
@aace: If you're working in After Effects* there is absolutely no point in upsampling the footage before import. Switching AE to a 16- or 32-bit working space means everything you do on the timeline to clips with a lower bit depth is automatically interpolated, provided the effect supports that bit depth. Similarly, if you export a 16- or 32-bit timeline to an 8-bit media file, After Effects automatically adds a dither pass to mitigate banding (there's a small sketch of the idea after the footnote below). You won't normally see it happen, but you can force dithering to affect the preview buffer by applying a null 8-bit effect (e.g. Arithmetic with everything set to zero) on an adjustment layer.

*Converting to a mezzanine format (e.g. ProRes) is useful for NLEs where you need to scrub the timeline a lot, as it's less demanding on your CPU to rapidly decode the frames. After Effects doesn't work like that; it keeps an internal buffer of format-agnostic pixel data for every frame (users see it as the RAM Preview).
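To make the dither pass concrete, here's a small standalone sketch (nothing to do with AE's internals) showing how adding noise before an 8-bit quantize trades visible banding for fine, even grain:

```python
# Sketch: quantizing a smooth float gradient to 8-bit, with and without
# a dither pass. Long runs of identical output values read as visible
# bands; dither breaks them up into short, noisy runs instead.
import numpy as np

rng = np.random.default_rng(0)
gradient = np.linspace(0.20, 0.25, 2000)   # subtle ramp, prone to banding

plain = np.round(gradient * 255)
dithered = np.round(gradient * 255 + rng.uniform(-0.5, 0.5, gradient.size))

def mean_run(q):
    """Average length of runs of identical consecutive values."""
    changes = np.flatnonzero(np.diff(q)) + 1
    runs = np.diff(np.concatenate(([0], changes, [len(q)])))
    return runs.mean()

print("avg band width, no dither:", mean_run(plain))     # long flat bands
print("avg band width, dithered: ", mean_run(dithered))  # short, noisy runs
```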

You shouldn't export to RGBA unless the footage contains a true alpha channel (e.g. footage assembled from a PNG-with-alpha sequence or a timeline with open masks). It bloats the file with null data, which makes it harder for downstream software to process.
#75
I'd always rent before buying. You can get something like a Sigma 8mm or Canon 10-22mm for around $70 a week.