Some general responses to this thread:

We get a lot of comments about the UI for SpeedGrade being strange, and I agree that it's very different to anything else in the Creative Suite family. It's a legacy of two things. Firstly, Adobe have had their hands on the code for only a short time, so the initial development was all about compatibility - Sg as it looks today is basically the same UI that IRIDAS shipped, though going forward things are evolving. Secondly, the DI/CT professionals who use it every day have had many years to learn a bunch of quirky UIs from the vendor of their choice, so a wholesale change to something that behaves like Premiere Pro or Resolve would annoy far more people than it would please.
Without wishing to be rude to anyone, Sg was designed for a very specific user - a color timer in a motion picture studio - so the UI is secondary. Colorists use hardware desks, so they don't care if the wheels are fiddly to use with a mouse, and neither did IRIDAS. The concept of 'look' layers is also difficult to grasp without some heavy reading of the manuals, but given that the target audience do nothing else all day every day, the industry somewhat prefers things to be obscure (a colorist keeps his or her job only until the DP figures out how to do it themselves!). This is also why Sg as of today will only import digital cinema footage rather than formats like H.264 and AVCHD. That will change in October, but the UI is largely static for the time being.
The solution, as you'll know by now, is to steal the Lumetri Color Engine from inside Sg and plug it into the other Adobe applications. You can already do that in Premiere Pro CC (applying a "look" file as an effect), and come October you'll be able to apply looks directly from Adobe Media Encoder CC, but there's no escaping the need to jump into Sg at some point if you want to create your own looks. Again, in October the link between Pr and Sg will finally connect properly.
Will we reach a point where Sg is as intuitive to use as the consumer products? No, but the biggest quirks will be ironed out. Creative Cloud has changed how Adobe see the application landscape, with truckloads of people getting access to programs that, in all fairness, are beyond their abilities. Dumbing down these top-end applications isn't an option the professional user community would accept, so there will always be a cliff-face learning curve between something like Photoshop Elements and SpeedGrade. The hope is to create workflows that the majority of 'prosumer' users can follow, which grab snippets of pre-made functionality without the user necessarily understanding what's happening.
At the basic level, someone with the classic "make my iPhone video look good" question can pick one of the predefined Look files and apply it without needing to know anything about what's being adjusted. Step one level up from that and you can jump across into Sg and fiddle with those defaults, maybe to widen a split tone or burn down the highlights. You will need to read the help file, but not much of it. A colorist who has to shot-match against Macbeth cards and calibrate Alexa log footage for broadcast will lock herself in a basement for 6 months and learn SpeedGrade, then get paid handsomely for her efforts.
In terms of color depth, the sequences in Premiere Pro are always 32-bit floating point. The default MPE previews aren't (because nobody has a 32-bit monitor), but you can bring in your DNG footage, apply any combination of "/32-ready" effects, and export back out to a lossless format of your choice; a pixel in the input stream will be a pixel in the output stream.

In contrast, you have to explicitly set the bit depth in AE (because it's far more CPU-intensive to do the comps in /32). The rule in AE for maximum quality is simple: pick a comp depth equal to or higher than the deepest source file you intend to feed it. If the source is a 14-bit ML DNG, there is no significant benefit in going above a /16 comp - the finer quantization steps would have a very small effect if you apply some ultra-extreme grades, but without the source data in the first place you're not gaining any 'real' pixels. If you're exporting that comp to H.264 for the Web you may as well stick in /8 and tone-map on the way in.
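To put rough numbers on the depth rule, here's a minimal sketch (plain Python, nothing Adobe-specific; the depths are the ones mentioned above). The point is simply that a deeper comp contains the source exactly, while a shallower one must throw levels away:

```python
# Why the AE comp depth should be >= the source depth: an n-bit channel
# holds 2**n discrete tonal levels, and a comp can't invent levels the
# source never captured - it can only preserve or discard them.
for name, bits in [("8-bit comp", 8),
                   ("14-bit ML DNG source", 14),
                   ("16-bit comp", 16)]:
    print(f"{name:20s} {2**bits:>6d} levels per channel")

# 8-bit comp              256 levels per channel  (crushes a 14-bit source)
# 14-bit ML DNG source  16384 levels per channel
# 16-bit comp           65536 levels per channel  (contains all 14 bits)
```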
For ML raw video shooters the workflow is absolutely going to improve in October; we're not at the point of supporting MLV as a native file format, but the time it takes to get from a folder of DNGs to a Vimeo-ready file will drop hugely. If you're just transcoding a rush to show someone, you'll be able to do it in AME with full hardware-accelerated rendering. The Mercury Playback Engine in Premiere Pro means that in theory you'll be able to scrub about your CinemaDNG timeline as smoothly as you want, but with all raw footage the bottleneck very firmly arrives at your disk. Even with a decent GPU and more than 12GB of RAM, unless you're serving the footage from an SSD or a multi-striped RAID array the system will often struggle to read the frames fast enough.
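To give a rough feel for that disk bottleneck, here's a back-of-the-envelope sketch; the frame size, bit depth, and frame rate are assumptions typical of 1080p Magic Lantern raw rather than figures from any specific rig:

```python
# Rough sustained read rate needed to scrub a CinemaDNG timeline in
# real time. Assumed values: 1920x1080 frames, 14 bits per photosite
# (packed; real DNG files carry some extra header overhead), 24 fps.
width, height = 1920, 1080
bits_per_sample = 14
fps = 24

bytes_per_frame = width * height * bits_per_sample / 8
mib_per_second = bytes_per_frame * fps / 2**20

print(f"{bytes_per_frame / 2**20:.1f} MiB per frame")
print(f"{mib_per_second:.0f} MiB/s sustained at {fps} fps")

# ~3.5 MiB per frame, ~83 MiB/s sustained - a single spinning disk can
# barely sustain that sequentially, and falls well short once it has to
# seek across thousands of individual DNG files.
```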
That doesn't mean you need a behemoth of a machine to work effectively, just that you can't expect miracles from a Walmart desktop. I have Premiere Pro CC running on a Microsoft Surface Pro and it limps along OK - nothing I'd want to rely on, but as a proof of concept it's not bad to be transcoding on a 'tablet'. Some of the video pros who post their benchmark figures to Adobe have built things that make my eyes water (I've seen a 20-way RAID cabinet feeding a quad-Xeon board with 4 Tesla cards), but they'll be working on time-critical HD material, such as broadcast news where a minute of extra rendering means they'd miss air time. The 'average' Premiere Pro CC user hovers just a little above the minimum specs; they tend to have a decent graphics card but their disks are...
