Digital Intermediaries

Started by weldroid, August 21, 2012, 06:33:56 PM



weldroid

I do a lot of processing, not everything within Premiere (my main app for doing effects and cutting).

A typical example is an HDR video or timelapse: I usually do tone mapping offline with a VirtualDub-based toolchain, sometimes combined with Luminance HDR or Photomatix Pro (for the tone mapping itself).

Thing is, I usually end up with a bunch of .tiff or .png files, in directories of around 10 GB, which is too much for archiving (I like to archive the sources from which I compile my cuts). Long story short: I want to turn those still files into some kind of single, compressed file.

So far I have been saving 200 Mbps H.264 files (High profile, every frame a keyframe), because:
- it is compressed
- so far I have not been able to tell frame grabs from the 200 Mbps material apart from the originals, not even with heavy processing on them.
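For reference, this kind of all-intra intermediate can be produced with ffmpeg. This is only a sketch, assuming a build with libx264; the frame pattern, framerate and output name are placeholders for your own sequence:

```shell
# All-intra H.264 intermediate from a still sequence:
#   -g 1        -> GOP size 1, i.e. every frame is a keyframe
#   -b:v 200M   -> target roughly 200 Mbps
ffmpeg -framerate 25 -i frame_%05d.tiff \
  -c:v libx264 -profile:v high -b:v 200M -g 1 \
  intermediate.mkv
```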

Of course I might be wrong, and I may figure out one day that this is not the right thing to do, so let me know if you have a better or different workflow!  :D The only thing worrying me at this point is that (at least with HDR video) I get three consecutive lossy compressions: one in the camera, one for the intermediate and one for the final result. From my audio experience at least, three consecutive lossy compressions are NEVER a good idea...
Weapon of choice:
600D, EF-S 18-55 ISII Premiere, Luminance HDR, Blender, Luxrender
http://www.vimeo.com/weldroid (http://soundcloud.com/weldroid)

3pointedit

To see the difference, try this: export some frames using both methods, image sequence and video file. Load both in an editing program or Photoshop. Line up the same timecode/frame and subtract one from the other. This can be achieved by layering one on top of the other and applying an image math effect (like Add or Multiply, but set to Subtract). You should then find that the only parts of the image that remain are the artifacts.
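The same subtraction test can be done in a few lines of code instead of a layer stack. A minimal sketch: the two nested lists stand in for the same grayscale frame grabbed from the TIFF sequence and from the H.264 export (the pixel values here are made up for illustration); the residual is the per-pixel difference, and its peak tells you the worst-case error:

```python
def frame_diff(a, b):
    """Absolute per-pixel difference between two equal-sized frames."""
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Placeholder pixel data for the same 2x2 region of one frame:
original = [[120, 121], [119, 122]]   # grab from the still sequence
encoded  = [[120, 120], [118, 122]]   # same frame from the H.264 export

residual = frame_diff(original, encoded)
print(residual)                        # [[0, 1], [1, 0]]
print(max(max(row) for row in residual))  # peak error: 1 (out of 255)
```

Anything nonzero in the residual is compression artifact; a peak of 1-2 codes out of 255 is the kind of difference you would only see after extreme brightness/contrast boosting.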
550D on ML-roids

weldroid

Great suggestion!

Actually, the results are a bit shocking: not even 120 Mbps shows the tiniest bit of difference from the original, not even in 100% view. It is possible to demonstrate some difference if I add an insane amount of brightness (100) and contrast (74), but that is a crazy magnification of the problem. Even then, half of the difference seems to be MPEG artifacts from the first-generation (in-camera) compression.

There is a barely noticeable difference between 200 Mbps and 120 Mbps (200 Mbps is probably better, but I could not do a double-blind test this morning), and things get just a little bit uglier if not every frame is a keyframe (as I expected, at least at such a high bitrate).

tobi

One further possibility for analyzing the difference between two images is PerceptualDiff (pdiff for short) (http://pdiff.sourceforge.net/).

Michael Zöller

What about compressing to intermediate codecs like CineForm, ProRes or DNxHD?
neoluxx.de
EOS 5D Mark II | EOS 600D | EF 24-70mm f/2.8 | Tascam DR-40
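For what it's worth, two of the codecs mentioned above can also be written straight from a still sequence with ffmpeg. A sketch, assuming a build with the prores_ks and dnxhd encoders; frame pattern, framerate and sizes are placeholders:

```shell
# ProRes 422 HQ (prores_ks profile 3):
ffmpeg -framerate 25 -i frame_%05d.tiff \
  -c:v prores_ks -profile:v 3 out_prores.mov

# DNxHD at 185 Mbps (dnxhd only accepts matching
# size/framerate/bitrate combinations, e.g. 1080p25 at 185M):
ffmpeg -framerate 25 -i frame_%05d.tiff -s 1920x1080 \
  -c:v dnxhd -b:v 185M out_dnxhd.mov
```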

1%

Much better than editing in H.264. That pdiff program looks useful for testing encoder changes.