Optimal style for HDR shooting

Started by fsander, October 05, 2012, 01:49:55 PM


fsander

I am shooting HDR video on a Canon 550D at 720p@60fps (59.94 fps), in ideal circumstances with exposure A at ISO 100 and exposure B at ISO 800 and a shutter speed around 1/120, so obviously I control exposure with the iris.
My question is: while shooting the two streams (exposures), would you use one of the available flat picture styles - Cinema, Cinestyle, Marvel, flaat_10, Crooked Path, etc. - and if yes, at which stage of post production would you apply the LUT/curve to the video? (I intend to use a Photomatix workflow to produce .exr HDRs.)

KarateBrot

If you convert them to EXRs you should definitely apply the LUT before the conversion and save in 16 bit (otherwise shooting Cinestyle and applying the LUT wouldn't make any sense, because in 8 bit you would lose all the detail in the blacks again), then convert to EXRs.
BUT if your program wants to be fed the sRGB color space (for example), you need to apply an sRGB conversion after your LUT and then let your program make the EXRs.
The reason is: for the program to guess the right exposure of each pixel, the pixels need to respond to light in a linear way or, like I said, in sRGB (or whatever the program expects).
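
Roughly, that order of operations looks like this in code. This is only a sketch in numpy under a few assumptions: the frame is already decoded to an 8-bit RGB array, the picture-style LUT has been loaded as a plain 1D table of float values, the names frame8 and lut_1d are made up for illustration, and writing the actual EXR file is left to whatever tool you use.

Code:
# Sketch: apply the 1D LUT at float precision, then undo the sRGB curve so the
# data stored in the EXR is linear light. Not a tested pipeline.
import numpy as np

def apply_1d_lut(frame8, lut_1d):
    """Apply a 1D LUT to an 8-bit frame at float precision (no 8-bit rounding)."""
    x = frame8.astype(np.float32) / 255.0          # promote to float first
    positions = np.linspace(0.0, 1.0, len(lut_1d)) # LUT sample positions
    return np.interp(x, positions, lut_1d).astype(np.float32)

def srgb_to_linear(x):
    """Undo the sRGB transfer curve so the EXR stores linear-light values."""
    x = np.asarray(x, dtype=np.float32)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# graded = apply_1d_lut(frame8, lut_1d)
# linear = srgb_to_linear(graded)   # then write 'linear' out as a float EXR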

Edit:
I don't know if the LUTs from the different picture styles cancel out the native sRGB of the MOV files as well, or if they just linearize the picture style but preserve the sRGB curve, so I can't be sure what color space it is. If you don't know better: just apply the LUT in 16 bit, then convert to EXRs and don't worry about sRGB.

Can anyone help me out on this? I guess if you apply a LUT (for example for Cinestyle) you are still in sRGB. Right or wrong?

deleted.account

Personally I'd just test a few picture styles. Applying the 1D LUT only applies to Cinestyle anyway. Test applying it before and after merging and see which you like best.

16 bit won't gain you anything, nor will EXR, and the color space is going to be BT.709 / sRGB. When you create your EXRs (or another image format), EXRs really should be linear light, i.e. sRGB primaries with no 'gamma' curve. Although Photomatix will probably handle the linearizing anyway: it will just expect PNGs or JPGs to be 2.2 gamma, and EXRs to be 1.0.
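
In other words, something like this. This is a rough illustration of the gamma 2.2 vs gamma 1.0 convention, an assumption about the handling rather than Photomatix's actual code.

Code:
# 8-bit PNG/JPG frames treated as gamma-2.2 encoded, EXR data as already linear.
import numpy as np

def decode_gamma22(frame8):
    """8-bit gamma-2.2 frame -> linear-light floats in 0..1."""
    return (frame8.astype(np.float32) / 255.0) ** 2.2

def encode_gamma22(linear):
    """Linear-light floats -> gamma-2.2 values (e.g. for an 8-bit preview)."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

# An EXR frame skips decode_gamma22() entirely: its values are stored linear (gamma 1.0).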

How you produce the image sequences is up to you - how do you intend doing that?

3pointedit

When I last shot HDR video I used Cinestyle to keep the dark areas out of the compression noise, since I was gaining many more bits of color space thanks to the second exposure.

I must get around to using Blender's compositing nodes to combine the streams in a really wide gamut space.

KarateBrot

But 16 bit would be essential if you apply a LUT before the conversion, so that you have more color values in the dark spots.
Since I don't know whether it makes a difference if you apply the LUT before or after merging to EXRs, just using trial and error to see what's better does seem to be the best option. I wish I knew the mathematics a lot better, because then I'd probably have an answer to that.

deleted.account

Applying the 1D LUT would be via an NLE or AE or Nuke or something, so yes, that's best done at higher than 8-bit precision, but there's no point creating 16-bit images just to apply the LUT - just work at 32-bit float in the NLE or AE.

And if we're talking about 16-bit precision, 32 bit is more the norm as a working space. Taking an app like AE for example, importing an 8-bit file into a 16-bit workspace differs from importing into 32 bit: in a 32-bit workspace the 8-bit level 0 is centered in the 65536 range, so there's room for negative values too, whereas 16 bit maps the 8-bit levels into 32768 levels pro rata.

http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html

But it's academic really; it's the way the initial conversion from YCC video to RGB image sequences is done that perhaps matters more, as I'd assume Photomatix will be working at 32-bit precision linear light anyway, even on 8-bit image sequences.

KarateBrot

Yeah sure, I know about the 32-bit linear workspace. My point was just that maybe, IF the HDR processing depends on applying a LUT BEFORE processing, you would be better off dealing with footage higher than 8 bit, just so that the pixels have the correct brightness values to interpolate the two frames in a realistic/physical way. But I don't know enough to say whether there really is a difference if you apply the LUT before or after merging. (Because even if you're working in 32-bit linear you can alter the pixels in a non-linear way (baking in the gamma) and it's still a linear workspace, and if you interpolate "wrong" values in a non-linear way (for example gamma 2.2) you will get a totally wrong result. That's why I think it may be important to get the exact time you apply the LUT right, to get the correct color values.)
Or am I getting your wires crossed?
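
To put a number on that point: averaging the same two pixel values while they are still gamma encoded gives a different answer than averaging them in linear light. This is just a toy illustration with gamma 2.2 as the example curve, not how any particular HDR tool actually merges.

Code:
import numpy as np

a, b = 0.2, 0.8                          # two gamma-2.2 encoded pixel values

naive = (a + b) / 2.0                    # averaged while still gamma encoded
lin = (a ** 2.2 + b ** 2.2) / 2.0        # linearize first, average in linear light...
proper = lin ** (1.0 / 2.2)              # ...then re-encode for display

print(round(naive, 2), round(proper, 2)) # ~0.5 vs ~0.6 - noticeably different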

deleted.account

There's nothing to gain by creating 16-bit images from 8-bit, the data just gets padded with zeros - unless you use something like a denoiser, for example, to create interpolated values on the way from 8- to 16-bit image files, or even denoise at 32 bit in memory within the app. Then there might be some value in generating and storing 12 MB-per-frame 16-bit image sequences compared to 2 MB 8-bit frames. But the best place to do that is in memory within a 32-bit float workspace.
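
The 'padded with zeros' point is easy to check numerically. This is only an illustration of the idea (whether an app shifts the bits or scales by 257, the count of distinct levels stays the same), not of any particular converter:

Code:
import numpy as np

frame8 = np.arange(256, dtype=np.uint8)       # every possible 8-bit level
frame16 = frame8.astype(np.uint16) * 257      # common 8->16 bit scaling (255 * 257 = 65535)

print(len(np.unique(frame16)))                # still only 256 distinct values
print(np.diff(frame16.astype(np.int32))[:3])  # gaps of 257 between neighbours: no new precision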

I get what you're saying but I don't agree with your point, sorry. As long as the approach is consistent on both 'streams', either apply the LUT first or after, which as said is best done in a 32-bit workspace (non-linear even, for the difference it'll make) - and how / where else can a LUT be applied to an image sequence other than in a suitable app? Then preferably at higher precision than 8 bit.

Again, all this LUT stuff just applies to Cinestyle; you can grade it like any other source, you don't have to apply the LUT. How would anyone treat flaat_10, say? Just grade it, and that would be after merging?

There is no 'correct brightness'. Cinestyle, like Neutral, flaat or whatever, is just a gamma-encoded representation of the scene, sent to the camera encoder as 8-bit 0-255 JFIF raw 4:2:2 YCC and twisted into a BT.709-primaries, full-range-luma, BT.601-matrixed (T2i, 5D MkII) compressed h264 file. :-)

The closest we get to a 'physical / realistic way' is to blend and merge in linear light, after first making an attempt to correctly decompress and (more difficult) correctly linearize that twisted h264 source - which goes back to my earlier comment about putting more importance on doing the initial YCC to RGB conversion as 'best' we can. For example, I don't like the way AE handles that.
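
For reference, the 'full range luma, BT.601 matrixed' part boils down to a matrix like this at the YCC-to-RGB step. A minimal numpy sketch of just that matrix, assuming full-range BT.601 (JFIF-style) coefficients as described above; 4:2:0 chroma upsampling, range/matrix flags and the h264 decode itself are a separate, messier story.

Code:
import numpy as np

def ycc601_full_to_rgb(y, cb, cr):
    """Full-range BT.601 YCbCr planes (0..255) -> RGB floats in 0..1."""
    y = y.astype(np.float32)
    cb = cb.astype(np.float32) - 128.0
    cr = cr.astype(np.float32) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1) / 255.0, 0.0, 1.0)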

A consistent approach, applying the LUT or grade before or after and treating the two 'streams' the same, is all that is necessary.

Quote: "Because even if you're working in 32-bit linear you can alter the pixels in a non-linear way"

Not if your app linearizes the source correctly and you avoid non-32-bit-aware plugins, filters and effects. But if you mean you will see a different result compared to applying the LUT in a gamma-encoded workflow, sure.

Quote: "(baking in the gamma) and it's still a linear workspace, and if you interpolate 'wrong' values in a non-linear way (for example gamma 2.2) you will get a totally wrong result"

'Wrong' is subjective; the whole process is based on interpretation of the source anyway. It also depends on how the application works: it may only be blending in linear light, and the color processing may not be done in linear light at all. There are many steps just in getting an RGB image sequence out that can produce 'wrong' results to get picky about, before worrying about merging the Cinestyle or LUT-applied version.

Quote: "That's why I think it may be important to get the exact time you apply the LUT right, to get the correct color values"

Fair enough, but 'correct' is subjective. Like I said earlier, just test a 'before' and an 'after' merging and see which looks best - but that has nothing to do with producing 16-bit image sequences, which bring no improvement.

Just apply the LUT at 32-bit precision in the NLE or compositor, before or after. Again, this only applies to Cinestyle: it's just an RGB curve to take a LOG-looking image to a more typical Rec.709 image, nothing more - just a grade, nothing to do with 'physically correct brightness'.

KarateBrot

My point with the (at least) 16 bit was just about the case where you need to pass the sequence along to another program after you've applied a LUT (or grade). 32 bit is of course always better, like you said.
You're probably right that I shouldn't bother about these things with ugly compressed h.264 footage. If there isn't much of a difference I'll just forget about it. Thanks :)