Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - stevefal

Share Your Photos / Re: Teal-orange look
July 01, 2022, 09:35:16 AM
The orange face is over the top. Whites of the eyes should not have that color cast.

Find a quality reference image with similar composition and compare them while making changes. Teal shadows will create the perception of orange in highlights. You don't have to push them. But regardless, skintones should be perceived as natural in the context of the grade. That's where this fails.

Think about the mood the image should evoke, and research techniques to achieve it. Neither the action, the composition, nor the grade tells me what this image is saying. It's as blank as his expression.

General Development / Re: Hypothesis about HDR
May 20, 2017, 11:05:52 PM
Quote from: stevefal on May 20, 2017, 10:36:05 PM
For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.
No, brain fart. You're right: in this context, alignment and ghost removal are both desirable when used to solve the ISO flip-flop motion artifacts.

However since alignment would leave missing pixels on the trailing boundary of the image, that would have to be accounted for.
General Development / Re: Hypothesis about HDR
May 20, 2017, 10:36:05 PM
For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.

I don't know why you'd want to average clipped highlights or noisy shadows into your final image. Maybe I don't understand the math, but I assume you only want to include pixels representing detail.
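That intuition, average only the pixels that carry detail, can be sketched per pixel. This is just an illustration, not ML code; the thresholds, the 12-bit scale, and the 2-stop gap are assumptions:

```python
# Sketch: merge one pixel from a short and a long exposure, excluding
# values that hold no detail. Thresholds are illustrative assumptions.
CLIP_HI = 4050   # near saturation on an assumed 12-bit scale
NOISE_LO = 64    # below this, the value is mostly read noise

def usable(v):
    """A pixel contributes only if it holds real detail."""
    return NOISE_LO < v < CLIP_HI

def merge_pixel(short_exp, long_exp, ev_gap=2):
    """The short exposure is scaled up by the EV gap so both values
    are in the same linear units before averaging."""
    scaled_short = short_exp * (2 ** ev_gap)
    candidates = []
    if usable(short_exp):
        candidates.append(scaled_short)
    if usable(long_exp):
        candidates.append(long_exp)
    if not candidates:            # both useless: fall back to the long frame
        return long_exp
    return sum(candidates) / len(candidates)
```

With a clipped long exposure, only the scaled short pixel survives, which is exactly the "include only pixels representing detail" behavior described above.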
General Development / Re: Hypothesis about HDR
May 20, 2017, 03:02:03 AM
Simple averaging is not enough for quality HDR though, right?

Reading a bit, I see that "ghost removal" is a feature of some HDR stacking algorithms, including Photoshop's. Lots of discussion, but I'm not sure whether there's open-source code for it.
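For reference, the basic idea behind ghost removal in a stack can be sketched: pick a reference frame, and reject any frame's pixel that strays too far from the reference before averaging. The threshold is a made-up illustrative value; real algorithms are considerably more sophisticated:

```python
# Sketch: ghost rejection while averaging one pixel across a stack of
# aligned frames. Pixels that disagree with a reference frame beyond a
# threshold are assumed to be moving subjects ("ghosts") and left out.
GHOST_THRESHOLD = 50  # illustrative, in linear pixel units

def stack_pixel(values, ref_index=0):
    """Average one pixel across the stack, rejecting ghost outliers."""
    ref = values[ref_index]
    kept = [v for v in values if abs(v - ref) <= GHOST_THRESHOLD]
    return sum(kept) / len(kept)
```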
General Development / Re: Hypothesis about HDR
May 19, 2017, 10:54:14 PM
Yes, optical flow can be used to simulate motion. Of course the simulation is based on a simplistic assumption of motion, not what really happened in the scene. For example, a light moving between two frames could actually be two alternating lights. The practical impact of such a model when applied to a subset of pixels is hard to predict, and image-dependent. Optical flow is also compute intensive.

Heh, if optical flow was perfect, it could be used to generate all the missing exposures in the original ISO flip-flop model, and eliminate temporal artifacts in the first place.

At 48/60fps, the ISO flip-flop method has its drawbacks, but the artifacts are less mysterious. Another neat benefit is that you essentially get two videos for one, both 24/30p and 180 degrees. Merging is not required, but could be used only when recovery is needed.

Or... I think the knock-out merge algorithm would use a motion vector threshold to forfeit DR when it will produce a (significant) artifact. That way static scenes are 100% HDR, and less so as motion gets crazy.
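That knock-out idea can be sketched as a blend weight that falls off with motion vector magnitude: full HDR contribution for static pixels, none once motion crosses a threshold. The ramp shape and both thresholds are assumptions for illustration:

```python
def hdr_weight(motion_mag, full_hdr_below=1.0, no_hdr_above=8.0):
    """Weight for the second exposure's contribution to the merge.

    motion_mag: estimated motion vector length at this pixel
    (pixels per frame). Returns 1.0 for static areas (full HDR),
    0.0 for fast motion (forfeit DR to avoid artifacts), with a
    linear ramp in between.
    """
    if motion_mag <= full_hdr_below:
        return 1.0
    if motion_mag >= no_hdr_above:
        return 0.0
    return (no_hdr_above - motion_mag) / (no_hdr_above - full_hdr_below)
```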

To me, all this new stuff is more interesting wrt supersampling than slo-mo and rez-mania. This video drives it home: Look at those 1080p shots from the Alexa.
General Development / Re: Hypothesis about HDR
May 19, 2017, 09:54:20 PM
Quote from: a1ex on April 04, 2017, 09:58:44 PM
You gave me an idea
It's cool that motion blurs would be adjacent, but that's at the expense of desirable motion blur in that exposure in the first place. If the short exposure is, say, 90 degrees for a 2-stop recovery, that motion blur will be 45 degrees on output, or 1/4 the blur length. That means that any object seen from that exposure will have more of the strobe effect we usually don't want. That artifact might be more noticeable since those areas will have more defined edges.

I think the OP's idea, assuming alternating ISOs, is pretty interesting in that if each exposure is 360 degrees, the resulting HDR frames will have desirable 180 degree blur, with half the spatial error due to temporal offset between exposures - 1/60 vs 1/30s. And the long blur will help hide it.
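The shutter-angle arithmetic in these two paragraphs can be checked with a few lines, assuming capture at twice the output frame rate as described:

```python
def output_blur_angle(capture_angle_deg, capture_fps, output_fps):
    """Shutter angle as perceived at the output frame rate.

    Blur length corresponds to exposure time; expressing that time as
    a fraction of the (longer) output frame period gives the effective
    angle on output.
    """
    exposure_s = (capture_angle_deg / 360.0) / capture_fps
    return exposure_s * output_fps * 360.0

# 90 degrees captured at 60 fps, merged to 30p: 45 degrees on output,
# i.e. 1/4 the blur length of a normal 180-degree look.
# 360 degrees captured at 60 fps, merged to 30p: the desirable 180.
```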
Quote from: dmilligan on May 19, 2017, 08:05:45 PM
Not without aliasing, dual ISO introduces aliasing in the areas where the two ISOs don't overlap (highlights and shadows).

I assumed that halving resolution would eliminate/reduce aliasing, since the highlight/shadow lines would no longer be doubled.
(4K + dual_iso)/2 = 1920x800 @ 14+ stops DR w/o aliasing?
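The halving step can be sketched as plain 2x2 block averaging, so each output pixel combines one line from each ISO and the alternating-line structure no longer reaches the output. This ignores the per-line gain normalization a real dual ISO converter performs, so it only illustrates the resolution argument:

```python
def downsample_2x(frame):
    """Halve resolution by averaging 2x2 blocks of a 2D list."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            block = (frame[y][x] + frame[y][x + 1] +
                     frame[y + 1][x] + frame[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out
```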
Reverse Engineering / Re: ProcessTwoInTwoOutLosslessPath
December 31, 2016, 02:51:26 AM
Hey guys, I noticed this news on the interwebs. Exciting. I'll surely be back to give video a try if it leads to that.
I don't understand your objection. You seem to be advocating GPL for them, and he is correctly pointing out that GPL's share-alike requirement would kill their prospects among mainstream tools. BSD's non-copyleft approach makes them viable in the mainstream. That strategy looks sound, yet you're crying foul over "freedom and community" platitudes without a cogent explanation.
Good find. Looks like it has quality issues on the USB, but it also has DC according to this review:

And here's one for half the price (1366x768) without touch:

The company:

Not sure why you think so. I'm currently using 12-bit full-HD DNGs (5D3) from RAWMagic, and they're playing back nicely in real time. PPro CC 2014, MacBook Pro Retina mid-2012 quad i7.

The only issue I'm seeing, which brought me here, is that the preview image while playing is slightly softer than when still, or when rendered. It looks like PPro might be cutting a corner during playback, possibly for better performance.

I notice this only with my new large grading monitor, so it is subtle.

Has anyone else seen that?

[EDIT] by the way, my experience with RAW in CC2014 has me virtually decided to switch to this workflow. And I positively swore by ACR>AE>ProRES 4444 previously. But man, those AE rendering times.
Is 24-bit audio recording conceivably possible via mlv_rec, or is the camera (5D3) limited to 16?
My numbers are from testing I did months ago. I wanted to determine how dynamic range of an image impacted the behavior of various exposure controls, and for each of the Processes.

In each test, I tracked the value of an individual pixel before and after making ACR exposure changes, and compared across images that had the same value in that pixel location (150,150), but different overall image dynamic range. A red value means that there was a difference, which essentially amounts to the flicker issue.

The last test image corresponds to seb_'s experiment, in which a small amount of black added to the gradgray_white image negates the differences otherwise introduced. This is only true for Process 2010.
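The comparison described above can be sketched. The function name is hypothetical and the ACR adjustment itself is a stand-in; the point is only the flagging logic: the same tracked pixel, identical before adjustment across the test images, should remain identical after adjustment, or the control is image-content-dependent and will flicker:

```python
def flags_flicker(adjusted_values, tolerance=0):
    """Given post-adjustment values of the SAME tracked pixel across
    several test images (all of which had identical values at that
    pixel before adjustment), return True if any now differ, i.e. the
    adjustment depends on overall image content.
    """
    baseline = adjusted_values[0]
    return any(abs(v - baseline) > tolerance for v in adjusted_values)
```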
Here is my test data and results. Process 2012 is a disaster for this issue:



I looked at my data and yes, I confirmed that Process 2010 recovery and fill are tricked with black lines. It does not work with Process 2003 or 2012, as those algorithms consider the number of white/black pixels.

I did not try the white line case.
Well clearly that had a huge impact. I'll have to look at my test data and see if I did the two-line test with Process 2010.

Your unaltered footage seems to clearly show extreme microcontrast, which I assume is from the highlight recovery. I mean, the white clouds are darker than the blue sky, which is weird.

Anyway, cool result.
Works! With about -1EV of ML digital ISO, the RAW zebras almost match the false color highlight pattern on my external monitor. So in the RAW case, ML digital ISO is like HDMI gain to me.

The negative ML digital ISO steps are -0.1, -0.2, ... -0.7, -1.0, -1.5, -2.0. Are those camera-dependent, or is it possible that granularity could be made 0.1 throughout the range?
Taking your cue from the digital ISO comment, I see that "ML Digital ISO" lets me drop up to -2EV. Is this also not affecting the recorded data? If so, it seems I could use it to calibrate HDMI highlight clipping to roughly match RAW clipping?
I'd like to record RAW with an external monitor that has its own waveform, histogram, zebras, etc. But I don't understand the relationship between the pixel values sent via HDMI and the values recorded. I'm under the impression that normally, what I get via HDMI throws 1 bit off the top and the remainder off the bottom. If this is correct, my external monitor would show clipping 1 bit before it's actually happening. Can you correct me on this?
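A sketch of that assumed mapping (it is the premise of the question, not a confirmed fact): a 14-bit raw value, with 1 bit dropped off the top and the remaining 5 off the bottom, lands in 8 bits:

```python
RAW_BITS = 14   # assumed sensor bit depth
HDMI_BITS = 8
TOP_DROP = 1    # bits discarded above the HDMI window (per the premise)
BOTTOM_DROP = RAW_BITS - HDMI_BITS - TOP_DROP  # = 5 bits off the bottom

def raw_to_hdmi(raw):
    """Map a raw value into the assumed 8-bit HDMI window."""
    v = raw >> BOTTOM_DROP               # discard shadow detail
    return min(v, (1 << HDMI_BITS) - 1)  # values above the window clip

# Under this assumption, a raw value at half of full scale (1 stop below
# raw clipping) already reads as HDMI white, so the monitor would show
# clipping 1 stop early.
```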

Finally, ML has LV brightness, contrast and saturation settings for the internal monitor, but they do not apply to the external output. Is that a camera limitation, or is it possible to send via HDMI, say the RAW 8 MSBs for highlight preservation, or even a low-contrast version?
I don't think this works reliably. I believe that ACR's algorithms behave based on the number of white/black or highlight/shadow pixels in the image, not just the existence of some. For instance, an image that is mostly black with a little light will be changed differently than one that is mostly white with a little dark.

The "two lines" approach is the first thing I tested when measuring ACR's behavior, and it did not improve the flickering problem. But trust me, I'd love to be wrong.

Can you demonstrate your technique working with aggressive ACR settings?

Also, what do you mean by "grade my H.264 videos in ACR"? How is that possible?
It is a pen display. The HDMI input is for displaying the output from your PC/Mac, in order for the stylus to draw right on the image of what you are making in Photoshop etc.

Agreed it's rare; this is the only one I've ever seen.
Here's an Android tablet with HDMI input:

Pretty expensive but neat. It's a drawing tablet primarily.
Feature Requests / Re: Audio output through HDMI
August 01, 2014, 10:34:00 AM
Talk of the Hyperdeck Shuttle and RAW seems misinformed. Neither Canon nor ML sends RAW data via HDMI. The HDMI signal is 8-bit.