@sonicthehedgehog@live.com:
No idea what Vegas does. A 16-bit PNG has 3x16 bit per pixel, while ffmpeg's ProRes 4444 has 4x10 bit, no matter what Vegas reports. Maybe Vegas interprets it at such a low bit depth, but then this is (as Danne wrote) a Vegas issue.
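For reference, the per-pixel storage of the two formats can be compared with simple arithmetic (the pixel format names in the comments are ffmpeg's; this is just an illustration, not MLVApp code):

```python
# Bits per pixel for the two formats discussed above.
# A 16-bit PNG stores three 16-bit channels (ffmpeg: rgb48),
# ProRes 4444 stores four 10-bit channels (ffmpeg: yuva444p10le).
png16_bpp = 3 * 16       # R, G, B at 16 bit each
prores4444_bpp = 4 * 10  # Y, U, V, A at 10 bit each

print(f"16-bit PNG: {png16_bpp} bits/pixel")
print(f"ProRes4444: {prores4444_bpp} bits/pixel")
```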
What's the point of having such a high bit depth after grading? Debayered footage with the white balance burned in carries "different" 12 bit values than the original 12 bit RAW. If you want to keep all the RAW information for your NLE, you should use DNG, but then you'll need a good RAW processing engine. And ProRes is YUV anyway.
@reddeercity: thanks for showing such projects, it is always very interesting to see what (and how) other people do. With CUDA the main problem is: I have no hardware to run such code.

With OpenCL we would have a small chance, but everything we tried so far (e.g. a bilinear demosaic) spent longer copying buffers between RAM and GPU than MLVApp needs to process the entire picture on the CPU. So it seems better to have the entire pipeline on the GPU - but that would mean a 100% new version of our app.
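A rough back-of-the-envelope model shows why the buffer copies can dominate. The frame geometry and the bandwidth figure below are assumptions for illustration only, not measurements from MLVApp:

```python
# Rough model: every processing step that runs on the GPU but
# returns its result to RAM pays two host<->GPU copies.
width, height = 1920, 1080
bytes_per_px = 3 * 2  # assumed 16-bit RGB working format
frame_bytes = width * height * bytes_per_px

pcie_gb_s = 6.0  # assumed effective host<->GPU bandwidth in GB/s
copy_ms = frame_bytes / (pcie_gb_s * 1e9) * 1e3
roundtrip_ms = 2 * copy_ms  # upload + download per GPU step

print(f"frame size:   {frame_bytes / 1e6:.1f} MB")
print(f"one-way copy: {copy_ms:.2f} ms")
print(f"round trip:   {roundtrip_ms:.2f} ms")
# If the CPU already finishes the same step in a few ms, a single
# offloaded kernel gains nothing - only a full GPU pipeline would.
```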
Very interesting is that they managed to use the ffmpeg libs directly from their app. We tried that when we started with ffmpeg, without success. That's why we have the pipe solution now, which is indeed not very nice on Windows (a cmd window pops up).
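The pipe approach can be sketched like this: build an ffmpeg command that reads raw frames from stdin and feed it through a pipe. The function name and all parameter values are illustrative, not MLVApp's actual code:

```python
import subprocess

def build_prores_cmd(width, height, fps, out_file):
    """ffmpeg command reading raw 16-bit RGB frames from stdin
    and encoding them to ProRes 4444 (illustrative sketch only)."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",       # raw frames arrive over the pipe
        "-pix_fmt", "rgb48le",  # 16-bit RGB, little endian
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",              # read from stdin
        "-c:v", "prores_ks",
        "-profile:v", "4444",   # 10-bit 4:4:4:4 output
        out_file,
    ]

cmd = build_prores_cmd(1920, 1080, 25, "out.mov")
print(" ".join(cmd))
# In the app one would then open the pipe, roughly:
#   proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
#   for frame in frames: proc.stdin.write(frame)
#   proc.stdin.close(); proc.wait()
```

On Windows this spawned process is what causes the visible cmd window, unless it is launched with the console suppressed.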