MLV App 1.14 - All in one MLV Video Post Processing App [Windows, Mac and Linux]

Started by ilia3101, July 08, 2017, 10:19:19 PM


theBilalFakhouri

Quote from: masc on September 08, 2019, 08:16:25 PM
It has a gamma curve, yes. Some minutes ago the branch was merged to master, where you can adjust gamma as you like. For now: to be compiled.

Nice News!

Please bring back the AMaZE debayer for playback before releasing the next version. Thanks!

Danne

Quote from: Ilia3101 on September 09, 2019, 03:58:22 PM
I agree that the switching moment is confusing in some profiles, I noticed yesterday. Feels a bit like the RawTherapee UX. I am very strongly considering changing the default settings in MLV App, maybe having 0 dark strength by default.
Take your time. This feature must have taken a long time. Respect.

masc

Quote from: Ilia3101 on September 09, 2019, 03:58:22 PM
I am very strongly considering changing the default settings in MLV App, maybe having 0 dark strength by default.
But this would mean that importing footage would show "uglier results" by default.

Quote from: theBilalFakhouri on September 09, 2019, 04:06:45 PM
Please bring back the AMaZE debayer for playback before releasing the next version. Thanks!
Adding only AMaZE makes no sense... if we add more debayers for playback, we should add all. To be done...
5D3.113 | EOSM.202

ilia3101

Decided I should write an explanation of the upcoming MLV App profile update, so that people can use it properly and understand what's going on inside. Instead of simply selecting a profile, you will be able to individually set "Tonemapping function", "Gamma" and "Gamut".

Here's the explanation of what each new option does, in the order they affect the image:


  • Gamut: Which colour gamut MLV App will convert camera data to. This is simply choosing an RGB colour space without its transfer function (sometimes called gamma): only the primaries.



  • Tonemapping function: After the initial three stages of processing (white balance, gamut conversion and exposure), the data is still linear and unclipped; at this point MLV App applies the selected "tonemapping function". This could be any function you want: an actual tonemapping function like Reinhard, which rolls off the highlights smoothly while keeping a "linear" intention; a transfer/gamma function such as those defined by Rec.709 or sRGB; or even a LOG transfer function. The tonemapping functions are offered as presets, but you could add anything you want in the MLV App source code.


  • Gamma: A numerical value defining gamma as a power, applied to the image after the tonemapping function. If the tonemapping function already applied a gamma, for example the Rec.709 function, gamma should be 1.0 if you want to stick to the Rec.709 standard strictly. However, I would not recommend anyone follow the Rec.709 standard exactly; it clips really ugly. Much better to select Reinhard as the tonemapping function for a smooth highlight rolloff, then approximate the Rec.709 transfer function on top of that with a gamma value of 2.0.



Basically: gamut is your gamut, and tonemapping function + gamma combined are your transfer function. And after that is done, all other MLV App settings are applied to the image.
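That order can be sketched in a few lines. This is a hypothetical illustration, not MLV App's actual code: the 3x3 matrix, the Reinhard curve and the gamma value are stand-ins you would swap for the real gamut matrix and your chosen presets.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct RGB { float r, g, b; };

// 1) Gamut: a 3x3 matrix converts linear camera RGB into the chosen primaries.
RGB applyGamut(const RGB &in, const float m[3][3])
{
    return { m[0][0] * in.r + m[0][1] * in.g + m[0][2] * in.b,
             m[1][0] * in.r + m[1][1] * in.g + m[1][2] * in.b,
             m[2][0] * in.r + m[2][1] * in.g + m[2][2] * in.b };
}

// 2) Tonemapping function: Reinhard rolls off highlights smoothly while
//    staying near-linear for small values.
float reinhard(float x) { return x / (1.0f + x); }

// 3) Gamma: a plain power applied after the tonemapping function.
float gammaEncode(float x, float gamma)
{
    return std::pow(std::max(x, 0.0f), 1.0f / gamma);
}

// Reinhard followed by gamma 2.0 approximates a Rec.709-like transfer
// with a soft highlight rolloff instead of a hard clip.
float processChannel(float linearValue, float gamma)
{
    return gammaEncode(reinhard(linearValue), gamma);
}
```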

ilia3101

Quote from: masc on September 09, 2019, 04:33:30 PM
But this would mean that importing footage would show "uglier results" by default.

Could compensate by changing the default gamma to a lower value. We will need to think about it and consider what looks best. Another option is to add a button, maybe in the profile section, that resets all of the creative adjustments not to defaults but to actual zero, accompanied by an explanation of why you may want to use this button.

ilia3101

@DavidP you may want to try the new BMDFilm profile in the new update, to see if it matches your URSA better. It should do.

2blackbar

The current default settings for Film and Tonemapped are the best looking IMO, and I'd like them to stay available somewhere as an option even if the default look of the footage changes.
I wanted gamma to experiment and make the footage even more like film.

ilia3101

Those presets will be available forever. And if I do change the defaults, I will make sure to keep it on that level of quality.

Quote from: 2blackbar on September 09, 2019, 07:43:20 PM
I wanted gamma to experiment and make the footage even more like film.

My explanation of the new settings a couple of posts up will help with such experiments.

masc

Quote from: theBilalFakhouri on September 09, 2019, 04:06:45 PM
Please bring back the AMaZE debayer for playback before releasing the next version. Thanks!
Quote from: masc on September 09, 2019, 04:33:30 PM
Adding only AMaZE makes no sense... if we add more debayers for playback, we should add all. To be done...
Done. Additionally I added an option to prevent MLVApp from switching between the viewer debayer and the playback debayer.

Luther

So, I compiled the master branch to test the gamuts. Some notes I took while testing:
- Sony S has warm tones desaturated and shifted towards yellow
- AdobeRGB gets visible chromatic noise with saturated blues
- XYZ is completely off. Now that other gamuts are implemented, is XYZ still relevant?
- ProPhoto has separation issues between the green-blue spectrum. It shifts the WB towards blue. The matrix is D55 instead of D65, maybe?
- Alexa WG seems good
- ACES seems good too, but has a WB shift towards yellow. Has the most smooth chromatic separation, but is very desaturated.

While testing the gamuts, I also noticed other stuff.
Is denoising done before or after sharpening? Denoising should always come before sharpening, but I'm not sure MLVApp is doing that. Is it?
Also, some ideas from RawTherapee that MLVApp lacks:
- "Flexible" curves (or "centripetal Catmull–Rom spline curve") are really useful:
http://rawpedia.rawtherapee.com/Exposure#Flexible
- Perceptual curve:
http://rawpedia.rawtherapee.com/Exposure#Perceptual
- Lab adjustments. For example, CC Curve cannot be replicated with MLVApp's HSL:
http://rawpedia.rawtherapee.com/Lab_Adjustments#CC_Curve


Overall, the new gamut offers much better colors. You can clearly see the results in saturated tones. Thanks for all the work @Ilia3101 ! And @masc too for the interface adjustments :)


Danne

Quote from: Luther on September 09, 2019, 09:53:24 PM
So, I compiled the master branch to test the gamuts. Some notes I took while testing:
- Sony S has warm tones desaturated and shifted towards yellow
- AdobeRGB gets visible chromatic noise with saturated blues
- XYZ is completely off. Now that other gamuts are implemented, is XYZ still relevant?
- ProPhoto has separation issues between the green-blue spectrum. It shifts the WB towards blue. The matrix is D55 instead of D65, maybe?
- Alexa WG seems good
- ACES seems good too, but has a WB shift towards yellow. Has the most smooth chromatic separation, but is very desaturated.
Nice testing. How are you verifying/comparing correct gamuts?

Luther

Quote from: Danne on September 09, 2019, 09:58:02 PM
Nice testing. How are you verifying/comparing correct gamuts?

Not very scientific, Danne. I'm using MLV footage of a Pantone chart (taken with a 50D). My display is 99% sRGB, but I'm not sure it is accurate. Here's the Pantone footage in case you want to test too:
https://we.tl/t-gA4ixpCkHx

Danne

I see. Thanks. Too tired here atm but might check into it another day.

escho

Compiled master with openSUSE Tumbleweed. No problems.

Did a short test with a video of the International Space Station. I get better details than before out of the box :)

Debayer for playback: it does not change anything on playback. Example: if I choose None (monochrome), the playback video stays colored.

Overall: MLVApp gets better and better. Great work ...
https://sternenkarten.com/
600D, 6D, openSUSE Tumbleweed

ilia3101

@Luther
It converts the camera image to the gamut you select (accurately and correctly) and keeps it in that gamut, so the colours will of course change, as your display does not change its gamut. All of this is because I have not added an extra conversion at the end of processing to convert back into Rec.709 or whatever. If I had added this, it would show colours consistently when switching gamuts, but I'll add that later, in a way that keeps the code clean..., more work than it seems.

(BTW the rest of this post is not at all aimed at you Luther, it is just a rant about video people and their colour management approach)

Video people have a certain approach to colour management (it took me a long time to understand), and I am following that approach, which is "it's OK to interpret and display an image as a different colour space than it actually is", though they are generally unaware that that's what they do.

If you set Alexa Wide Gamut and the Alexa Log curve in MLV App, you will get the same colour that comes out of an Alexa, as it is in the Alexa colour space. This footage, now in a log colour space, is not accurately displayed on any display; you must transform it first to get the colours it represents.

And this whole transforming-log thing is what most video people don't seem to know about, or disregard. They think that log is "flat" and that's why it's good, which is not the correct explanation: log is good because it can store the dynamic range very efficiently. It is not a "flat look", it is math. Log spaces tend to have a wide gamut so they don't clip any chroma values, adding to the perceived "flat look" idea. With log, you will only get the true colour values by transforming to the colour space you actually need, very often Rec.709, and the best way to do this is mathematically... IDTs can be used to transform log, there are also LUTs that do it, and I know Resolve and Nuke have plenty of tools to help. Meanwhile, video people who are not aware of any of this just grade the log image straight away, displaying it as Rec.709 and fudging with curves and tweaking until it looks right (not a good approach IMO).
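That transform chain can be sketched as: decode the log signal back to scene-linear, then encode with the display's transfer function. The log curve below is a made-up log2 stand-in covering eight stops, not an actual camera curve like Alexa LogC (a real IDT would use the vendor's published constants); the encode is the Rec.709 OETF from the standard.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Decode: inverse of a toy log curve spanning ~8 stops below 1.0.
// This stand-in is NOT a real camera log curve.
float toyLogToLinear(float code)
{
    const float stops = 8.0f;                // assumed encoded range
    return std::exp2(code * stops - stops);  // code 1.0 -> linear 1.0
}

// Encode: Rec.709 OETF as defined by the standard.
float rec709Oetf(float linear)
{
    linear = std::max(linear, 0.0f);
    return linear < 0.018f ? 4.5f * linear
                           : 1.099f * std::pow(linear, 0.45f) - 0.099f;
}

// Full chain: log code value -> scene-linear -> display-referred Rec.709.
float logToRec709(float code)
{
    return rec709Oetf(toyLogToLinear(code));
}
```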

But it's an approach that works, so I have decided it is valid because it is so widely used. And video people who do know how to transform colour with IDTs and such will be fine too, as they will know what to do with the output.

However, for people like me (and I guess Luther) who want to actually grade in MLV App with a wider gamut used only for internal processing, outputting results in sRGB/Rec.709 just as before, this update is not everything.


also @Luther am I right in thinking you want the same as me? A wide gamut internally so that colours do not clip?


And yes, the matrix for ProPhoto is D50; that's why it's blue. ProPhoto has D50 as its white point for some reason. Maybe I should adapt it to D65 like all the others. The only true solution to this RGB crap is spectral colour processing (probably not coming to MLV App until a full rewrite).
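For reference, the usual way to re-reference a D50-based matrix like ProPhoto's to D65 is Bradford chromatic adaptation. A sketch that builds the D50-to-D65 adaptation matrix from the standard published Bradford matrix, its inverse and the D50/D65 white points (illustrative, not MLV App code):

```cpp
#include <cassert>
#include <cmath>

// Standard Bradford cone-response matrix and its inverse (published values).
static const double kBradford[3][3] = {
    { 0.8951,  0.2664, -0.1614},
    {-0.7502,  1.7135,  0.0367},
    { 0.0389, -0.0685,  1.0296}};
static const double kBradfordInv[3][3] = {
    { 0.9869929, -0.1470543, 0.1599627},
    { 0.4323053,  0.5183603, 0.0492912},
    {-0.0085287,  0.0400428, 0.9684867}};
// Reference white points in XYZ (Y normalised to 1.0).
static const double kD50[3] = {0.96422, 1.0, 0.82521};
static const double kD65[3] = {0.95047, 1.0, 1.08883};

static void mulMV(const double m[3][3], const double v[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
}

// Build the 3x3 matrix that adapts XYZ values from D50 to D65:
// out = Binv * diag(dstCone / srcCone) * B
void bradfordD50toD65(double out[3][3])
{
    double src[3], dst[3];
    mulMV(kBradford, kD50, src);  // source white in cone space
    mulMV(kBradford, kD65, dst);  // destination white in cone space

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            out[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                out[i][j] += kBradfordInv[i][k] * (dst[k] / src[k]) * kBradford[k][j];
        }
}
```

Multiplying this adaptation matrix into the ProPhoto matrix would map the D50 white to D65 and should remove the blue cast.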


Edit: if any video experts would like to correct me on anything please do. I want to know what I'm getting wrong.

Luther

Quote from: Ilia3101 on September 09, 2019, 11:02:31 PM
also @Luther am I right in thinking you want the same as me? A wide gamut internally so that colours do not clip?
You're exactly right. I agree with everything you said.
Ideally, the preview should be Rec.709 or Rec.2020 (with a box to choose between the two). The raw data is initially converted to the right gamut (such as ACES AP0 or AP1), and then the processing (curves, saturation, LUTs) goes in between the two conversions (ACES AP1 > curves > Rec.2020, for example).

Quote
but I'll add that later, in a way that keeps the code clean..., more work than it seems.
We know. I find it frustrating how little information there is about these colour conversions too.

Quote
Maybe I should adapt it to D65 as all others.
An easier solution would be to just decrease the WB slider :P
Not elegant, but it might work.

Quote
The only true solution to this RGB crap is spectral colour processing (probably not coming to MLV App until a full rewrite).
Or try to port OpenColorIO. I have no idea how that would work though.

masc

Quote from: Luther on September 09, 2019, 09:53:24 PM
Is denoising done before or after sharpening? Denoising should always come before sharpening, but I'm not sure MLVApp is doing that. Is it?
Nope. I know that this would be better, but it is not possible with the current implementation. So better not to sharpen at all, or do it afterwards in the NLE, if that is somehow important ;)

Quote from: Luther on September 09, 2019, 11:49:48 PM
Or try to port OpenColorIO. I have no idea how would that work though.
OpenColorIO has sooo many dependencies, and nearly none of them is currently needed. So it is really hard work to bring that to life (infrastructure alone). And then the fun starts: understanding what this lib does and how it works.

2blackbar

Denoising before sharpening would require two passes, one to store the denoised frame...
Anyway, does anyone know how to get rid of that wavy-flag effect when stabilizing in MLVApp? It's like a subtle wobble of the footage, as if every corner were stretched/warped a bit to stabilize it. It doesn't look good. Any way to disable that, so it only rotates the footage and zooms in/out?
I see vid.stab does it; there are other stabilizers that work better with rolling shutter.

Deshaker in VirtualDub was great and I used it a lot. Is it possible to merge it?

masc

Quote from: 2blackbar on September 10, 2019, 03:52:09 PM
Denoising before sharpening would require two passes, one to store the denoised frame...
That is not really the problem. The problem is that nearly the complete processing pipeline is multithreaded, but the denoiser works only single-threaded. There is an extra pipeline just for denoising; that's why it is done at the very end. If we want to make it better, we need a third processing pipeline... :P

Quote from: 2blackbar on September 10, 2019, 03:52:09 PM
Anyway, does anyone know how to get rid of that wavy-flag effect when stabilizing in MLVApp? It's like a subtle wobble of the footage, as if every corner were stretched/warped a bit to stabilize it. It doesn't look good. Any way to disable that, so it only rotates the footage and zooms in/out?
No idea what a "wavy flag" is. But corner-stretching and a "wobble" is the way those stabilizers work. You have the parameters in the MLVApp GUI. Search for "ffmpeg vid.stab" on Google to find information on which parameter has which effect.

Quote from: 2blackbar on September 10, 2019, 03:52:09 PM
Deshaker in VirtualDub was great and I used it a lot. Is it possible to merge it?
I think not. It is built for MS Visual C++, which is mostly not compatible with what we do. But I can't find any source code anyway.

2blackbar

I wanted to use your workflow with proxies and an XML file, but I'm using Sony Vegas. I can export two types of XML files, for Final Cut 7 and Final Cut X, but neither works in MLVApp; they show nothing. I linked the files, maybe their structure is different from the XML files you're using?
https://drive.google.com/open?id=1eSuMRESPofDNt1sCp-QWGU6Bnu-rQOD1
https://drive.google.com/open?id=1RkR_tpsloK8JUXllV1kCkqLwX0l0aSL8

masc

Thanks for the files! I never had a Vegas fcpxml file before. Indeed they are a bit different from FCPX fcpxml files: the start element of the clips is named differently. I wrote a quick fix; if you are able to compile, you can try it out now. Here, your fcpxml works now.
I have not paid attention to the old XML format yet.

2blackbar

Great! I have to wait for the monthly build, but I'm super happy that you managed to make it work.

jpegmasterjesse

Quote from: masc on August 22, 2019, 10:27:06 PM
Thanks for your feedback @jpegmasterjesse!
Already done.


Just realized, masc, that I don't actually see those options on Windows. In fact, in the latest release the clips just disappear from the sidebar (they are not deleted in Explorer).

And re: tabbing through all the sliders, it is possible and feels great, except there is no way to know which slider you are currently tabbed to.

Also, would it be possible to change the increments the values change in? Could Shift+LeftArrow make it move in increments of 10, for example?

Thanks for all your wonderful work.

Luther

Quote from: masc on September 10, 2019, 10:51:59 AM
Nope. I know that this would be better, but it is not possible with the current implementation.

Got it.
I've found the perceptual curve implementation in rawtherapee:

// this is a generic cubic spline implementation, to clean up we could probably use something already existing elsewhere
void PerceptualToneCurve::cubic_spline(const float x[], const float y[], const int len, const float out_x[], float out_y[], const int out_len)
{
    int i, j;

    float **A = (float **)malloc(2 * len * sizeof(*A));
    float *As = (float *)calloc(1, 2 * len * 2 * len * sizeof(*As));
    float *b = (float *)calloc(1, 2 * len * sizeof(*b));
    float *c = (float *)calloc(1, 2 * len * sizeof(*c));
    float *d = (float *)calloc(1, 2 * len * sizeof(*d));

    for (i = 0; i < 2 * len; i++) {
        A[i] = &As[2 * len * i];
    }

    for (i = len - 1; i > 0; i--) {
        b[i] = (y[i] - y[i - 1]) / (x[i] - x[i - 1]);
        d[i - 1] = x[i] - x[i - 1];
    }

    for (i = 1; i < len - 1; i++) {
        A[i][i] = 2 * (d[i - 1] + d[i]);

        if (i > 1) {
            A[i][i - 1] = d[i - 1];
            A[i - 1][i] = d[i - 1];
        }

        A[i][len - 1] = 6 * (b[i + 1] - b[i]);
    }

    for(i = 1; i < len - 2; i++) {
        float v = A[i + 1][i] / A[i][i];

        for(j = 1; j <= len - 1; j++) {
            A[i + 1][j] -= v * A[i][j];
        }
    }

    for(i = len - 2; i > 0; i--) {
        float acc = 0;

        for(j = i; j <= len - 2; j++) {
            acc += A[i][j] * c[j];
        }

        c[i] = (A[i][len - 1] - acc) / A[i][i];
    }

    for (i = 0; i < out_len; i++) {
        float x_out = out_x[i];
        float y_out = 0;

        for (j = 0; j < len - 1; j++) {
            if (x[j] <= x_out && x_out <= x[j + 1]) {
                float v = x_out - x[j];
                y_out = y[j] +
                        ((y[j + 1] - y[j]) / d[j] - (2 * d[j] * c[j] + c[j + 1] * d[j]) / 6) * v +
                        (c[j] * 0.5) * v * v +
                        ((c[j + 1] - c[j]) / (6 * d[j])) * v * v * v;
            }
        }

        out_y[i] = y_out;
    }

    free(A);
    free(As);
    free(b);
    free(c);
    free(d);
}

// generic function for finding minimum of f(x) in the a-b range using the interval halving method
float PerceptualToneCurve::find_minimum_interval_halving(float (*func)(float x, void *arg), void *arg, float a, float b, float tol, int nmax)
{
    float L = b - a;
    float x = (a + b) * 0.5;

    for (int i = 0; i < nmax; i++) {
        float f_x = func(x, arg);

        if ((b - a) * 0.5 < tol) {
            return x;
        }

        float x1 = a + L / 4;
        float f_x1 = func(x1, arg);

        if (f_x1 < f_x) {
            b = x;
            x = x1;
        } else {
            float x2 = b - L / 4;
            float f_x2 = func(x2, arg);

            if (f_x2 < f_x) {
                a = x;
                x = x2;
            } else {
                a = x1;
                b = x2;
            }
        }

        L = b - a;
    }

    return x;
}

struct find_tc_slope_fun_arg {
    const ToneCurve * tc;
};

float PerceptualToneCurve::find_tc_slope_fun(float k, void *arg)
{
    struct find_tc_slope_fun_arg *a = (struct find_tc_slope_fun_arg *)arg;
    float areasum = 0;
    const int steps = 10;

    for (int i = 0; i < steps; i++) {
        float x = 0.1 + ((float)i / (steps - 1)) * 0.5; // testing (sRGB) range [0.1 - 0.6], ie ignore highligths and dark shadows
        float y = CurveFactory::gamma2(a->tc->lutToneCurve[CurveFactory::igamma2(x) * 65535] / 65535.0);
        float y1 = k * x;

        if (y1 > 1) {
            y1 = 1;
        }

        areasum += (y - y1) * (y - y1); // square is a rough approx of (twice) the area, but it's fine for our purposes
    }

    return areasum;
}

float PerceptualToneCurve::get_curve_val(float x, float range[2], float lut[], size_t lut_size)
{
    float xm = (x - range[0]) / (range[1] - range[0]) * (lut_size - 1);

    if (xm <= 0) {
        return lut[0];
    }

    int idx = (int)xm;

    if (idx >= static_cast<int>(lut_size) - 1) {
        return lut[lut_size - 1];
    }

    float d = xm - (float)idx; // [0 .. 1]
    return (1.0 - d) * lut[idx] + d * lut[idx + 1];
}

// calculate a single value that represents the contrast of the tone curve
float PerceptualToneCurve::calculateToneCurveContrastValue() const
{

    // find linear y = k*x the best approximates the curve, which is the linear scaling/exposure component that does not contribute any contrast

    // Note: the analysis is made on the gamma encoded curve, as the LUT is linear we make backwards gamma to
    struct find_tc_slope_fun_arg arg = { this };
    float k = find_minimum_interval_halving(find_tc_slope_fun, &arg, 0.1, 5.0, 0.01, 20); // normally found in 8 iterations
    //fprintf(stderr, "average slope: %f\n", k);

    float maxslope = 0;
    {
        // look at midtone slope
        const float xd = 0.07;
        const float tx0[] = { 0.30, 0.35, 0.40, 0.45 }; // we only look in the midtone range

        for (size_t i = 0; i < sizeof(tx0) / sizeof(tx0[0]); i++) {
            float x0 = tx0[i] - xd;
            float y0 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x0) * 65535.f] / 65535.f) - k * x0;
            float x1 = tx0[i] + xd;
            float y1 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x1) * 65535.f] / 65535.f) - k * x1;
            float slope = 1.0 + (y1 - y0) / (x1 - x0);

            if (slope > maxslope) {
                maxslope = slope;
            }
        }

        // look at slope at (light) shadows and (dark) highlights
        float e_maxslope = 0;
        {
            const float tx[] = { 0.20, 0.25, 0.50, 0.55 }; // we look at (light) shadows and (dark) highlights

            for (size_t i = 0; i < sizeof(tx) / sizeof(tx[0]); i++) {
                float x0 = tx[i] - xd;
                float y0 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x0) * 65535.f] / 65535.f) - k * x0;
                float x1 = tx[i] + xd;
                float y1 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x1) * 65535.f] / 65535.f) - k * x1;
                float slope = 1.0 + (y1 - y0) / (x1 - x0);

                if (slope > e_maxslope) {
                    e_maxslope = slope;
                }
            }
        }
        //fprintf(stderr, "%.3f %.3f\n", maxslope, e_maxslope);
        // midtone slope is more important for contrast, but weigh in some slope from brights and darks too.
        maxslope = maxslope * 0.7 + e_maxslope * 0.3;
    }
    return maxslope;
}

void PerceptualToneCurve::BatchApply(const size_t start, const size_t end, float *rc, float *gc, float *bc, const PerceptualToneCurveState &state) const
{
    const AdobeToneCurve& adobeTC = static_cast<const AdobeToneCurve&>((const ToneCurve&) * this);

    for (size_t i = start; i < end; ++i) {
        const bool oog_r = OOG(rc[i]);
        const bool oog_g = OOG(gc[i]);
        const bool oog_b = OOG(bc[i]);

        if (oog_r && oog_g && oog_b) {
            continue;
        }
       
        float r = CLIP(rc[i]);
        float g = CLIP(gc[i]);
        float b = CLIP(bc[i]);

        if (!state.isProphoto) {
            // convert to prophoto space to make sure the same result is had regardless of working color space
            float newr = state.Working2Prophoto[0][0] * r + state.Working2Prophoto[0][1] * g + state.Working2Prophoto[0][2] * b;
            float newg = state.Working2Prophoto[1][0] * r + state.Working2Prophoto[1][1] * g + state.Working2Prophoto[1][2] * b;
            float newb = state.Working2Prophoto[2][0] * r + state.Working2Prophoto[2][1] * g + state.Working2Prophoto[2][2] * b;
            r = newr;
            g = newg;
            b = newb;
        }

        float ar = r;
        float ag = g;
        float ab = b;
        adobeTC.Apply(ar, ag, ab);

        if (ar >= 65535.f && ag >= 65535.f && ab >= 65535.f) {
            // clip fast path, will also avoid strange colours of clipped highlights
            //rc[i] = gc[i] = bc[i] = 65535.f;
            if (!oog_r) rc[i] = 65535.f;
            if (!oog_g) gc[i] = 65535.f;
            if (!oog_b) bc[i] = 65535.f;
            continue;
        }

        if (ar <= 0.f && ag <= 0.f && ab <= 0.f) {
            //rc[i] = gc[i] = bc[i] = 0;
            if (!oog_r) rc[i] = 0.f;
            if (!oog_g) gc[i] = 0.f;
            if (!oog_b) bc[i] = 0.f;
            continue;
        }

        // ProPhoto constants for luminance, that is xyz_prophoto[1][]
        constexpr float Yr = 0.2880402f;
        constexpr float Yg = 0.7118741f;
        constexpr float Yb = 0.0000857f;

        // we use the Adobe (RGB-HSV hue-stabilized) curve to decide luminance, which generally leads to a less contrasty result
        // compared to a pure luminance curve. We do this to be more compatible with the most popular curves.
        const float oldLuminance = r * Yr + g * Yg + b * Yb;
        const float newLuminance = ar * Yr + ag * Yg + ab * Yb;
        const float Lcoef = newLuminance / oldLuminance;
        r = LIM<float>(r * Lcoef, 0.f, 65535.f);
        g = LIM<float>(g * Lcoef, 0.f, 65535.f);
        b = LIM<float>(b * Lcoef, 0.f, 65535.f);

        // move to JCh so we can modulate chroma based on the global contrast-related chroma scaling factor
        float x, y, z;
        Color::Prophotoxyz(r, g, b, x, y, z);

        float J, C, h;
        Ciecam02::xyz2jch_ciecam02float( J, C, h,
                                         aw, fl,
                                         x * 0.0015259022f,  y * 0.0015259022f,  z * 0.0015259022f,
                                         xw, yw,  zw,
                                         c,  nc, pow1, nbb, ncb, cz, d);


        if (!isfinite(J) || !isfinite(C) || !isfinite(h)) {
            // this can happen for dark noise colours or colours outside human gamut. Then we just return the curve's result.
            if (!state.isProphoto) {
                float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
                float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
                float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
                r = newr;
                g = newg;
                b = newb;
            }
            if (!oog_r) rc[i] = r;
            if (!oog_g) gc[i] = g;
            if (!oog_b) bc[i] = b;

            continue;
        }

        float cmul = state.cmul_contrast; // chroma scaling factor

        // depending on color, the chroma scaling factor can be fine-tuned below

        {
            // decrease chroma scaling slightly of extremely saturated colors
            float saturated_scale_factor = 0.95f;
            constexpr float lolim = 35.f; // lower limit, below this chroma all colors will keep original chroma scaling factor
            constexpr float hilim = 60.f; // high limit, above this chroma the chroma scaling factor is multiplied with the saturated scale factor value above

            if (C < lolim) {
                // chroma is low enough, don't scale
                saturated_scale_factor = 1.f;
            } else if (C < hilim) {
                // S-curve transition between low and high limit
                float cx = (C - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1.f - cx);
                }

                saturated_scale_factor = (1.f - cx) + saturated_scale_factor * cx;
            } else {
                // do nothing, high saturation color, keep scale factor
            }

            cmul *= saturated_scale_factor;
        }

        {
            // increase chroma scaling slightly of shadows
            float nL = Color::gamma2curve[newLuminance]; // apply gamma so we make comparison and transition with a more perceptual lightness scale
            float dark_scale_factor = 1.20f;
            //float dark_scale_factor = 1.0 + state.debug.p2 / 100.0f;
            constexpr float lolim = 0.15f;
            constexpr float hilim = 0.50f;

            if (nL < lolim) {
                // do nothing, keep scale factor
            } else if (nL < hilim) {
                // S-curve transition
                float cx = (nL - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                dark_scale_factor = dark_scale_factor * (1.0f - cx) + cx;
            } else {
                dark_scale_factor = 1.f;
            }

            cmul *= dark_scale_factor;
        }

        {
            // to avoid strange CIECAM02 chroma errors on close-to-shadow-clipping colors we reduce chroma scaling towards 1.0 for black colors
            float dark_scale_factor = 1.f / cmul;
            constexpr float lolim = 4.f;
            constexpr float hilim = 7.f;

            if (J < lolim) {
                // do nothing, keep scale factor
            } else if (J < hilim) {
                // S-curve transition
                float cx = (J - lolim) / (hilim - lolim);

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                dark_scale_factor = dark_scale_factor * (1.f - cx) + cx;
            } else {
                dark_scale_factor = 1.f;
            }

            cmul *= dark_scale_factor;
        }

        C *= cmul;

        Ciecam02::jch2xyz_ciecam02float( x, y, z,
                                         J, C, h,
                                         xw, yw,  zw,
                                         c, nc, pow1, nbb, ncb, fl, cz, d, aw );

        if (!isfinite(x) || !isfinite(y) || !isfinite(z)) {
            // can happen for colours on the rim of being outside gamut, that worked without chroma scaling but not with. Then we return only the curve's result.
            if (!state.isProphoto) {
                float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
                float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
                float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
                r = newr;
                g = newg;
                b = newb;
            }

            if (!oog_r) rc[i] = r;
            if (!oog_g) gc[i] = g;
            if (!oog_b) bc[i] = b;

            continue;
        }

        Color::xyz2Prophoto(x, y, z, r, g, b);
        r *= 655.35f;
        g *= 655.35f;
        b *= 655.35f;
        r = LIM<float>(r, 0.f, 65535.f);
        g = LIM<float>(g, 0.f, 65535.f);
        b = LIM<float>(b, 0.f, 65535.f);

        {
            // limit saturation increase in rgb space to avoid severe clipping and flattening in extreme highlights

            // we use the RGB-HSV hue-stable "Adobe" curve as reference. For S-curve contrast it increases
            // saturation greatly, but desaturates extreme highlights and thus provides a smooth transition to
            // the white point. However, the desaturation effect is quite strong, so we apply a weighting
            const float as = Color::rgb2s(ar, ag, ab);
            const float s = Color::rgb2s(r, g, b);

            const float sat_scale = as <= 0.f ? 1.f : s / as; // saturation scale compared to Adobe curve
            float keep = 0.2f;
            constexpr float lolim = 1.00f; // only mix in the Adobe curve if we have increased saturation compared to it
            constexpr float hilim = 1.20f;

            if (sat_scale < lolim) {
                // saturation is low enough, don't desaturate
                keep = 1.f;
            } else if (sat_scale < hilim) {
                // S-curve transition
                float cx = (sat_scale - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                keep = (1.f - cx) + keep * cx;
            } else {
                // do nothing: very high saturation increase, keep stays at the minimum amount (0.2f)
            }

            if (keep < 1.f) {
                // mix in some of the Adobe curve result
                r = intp(keep, r, ar);
                g = intp(keep, g, ag);
                b = intp(keep, b, ab);
            }
        }

        if (!state.isProphoto) {
            float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
            float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
            float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
            r = newr;
            g = newg;
            b = newb;
        }
        if (!oog_r) rc[i] = r;
        if (!oog_g) gc[i] = g;
        if (!oog_b) bc[i] = b;
    }
}
float PerceptualToneCurve::cf_range[2];
float PerceptualToneCurve::cf[1000];
float PerceptualToneCurve::f, PerceptualToneCurve::c, PerceptualToneCurve::nc, PerceptualToneCurve::yb, PerceptualToneCurve::la, PerceptualToneCurve::xw, PerceptualToneCurve::yw, PerceptualToneCurve::zw;
float PerceptualToneCurve::n, PerceptualToneCurve::d, PerceptualToneCurve::nbb, PerceptualToneCurve::ncb, PerceptualToneCurve::cz, PerceptualToneCurve::aw, PerceptualToneCurve::wh, PerceptualToneCurve::pfl, PerceptualToneCurve::fl, PerceptualToneCurve::pow1;

void PerceptualToneCurve::init()
{

    // init ciecam02 state, used for chroma scalings
    xw = 96.42f;
    yw = 100.0f;
    zw = 82.49f;
    yb = 20;
    la = 20;
    f  = 1.00f;
    c  = 0.69f;
    nc = 1.00f;

    Ciecam02::initcam1float(yb, 1.f, f, la, xw, yw, zw, n, d, nbb, ncb,
                            cz, aw, wh, pfl, fl, c);
    pow1 = pow_F( 1.64f - pow_F( 0.29f, n ), 0.73f );

    {
        // init contrast-value-to-chroma-scaling conversion curve

        // contrast value in the left column, chroma scaling in the right. Handles for a spline.
        // Put the columns in a file (without commas) and you can plot the spline with gnuplot: "plot 'curve.txt' smooth csplines"
        // A spline can easily get overshoot issues, so if you fine-tune the values here, make sure the resulting spline is smooth afterwards,
        // by plotting it for example.
        const float p[] = {
            0.60, 0.70, // lowest contrast
            0.70, 0.80,
            0.90, 0.94,
            0.99, 1.00,
            1.00, 1.00, // 1.0 (linear curve) to 1.0, no scaling
            1.07, 1.00,
            1.08, 1.00,
            1.11, 1.02,
            1.20, 1.08,
            1.30, 1.12,
            1.80, 1.20,
            2.00, 1.22  // highest contrast
        };

        const size_t in_len = sizeof(p) / sizeof(p[0]) / 2;
        float in_x[in_len];
        float in_y[in_len];

        for (size_t i = 0; i < in_len; i++) {
            in_x[i] = p[2 * i + 0];
            in_y[i] = p[2 * i + 1];
        }

        const size_t out_len = sizeof(cf) / sizeof(cf[0]);
        float out_x[out_len];

        for (size_t i = 0; i < out_len; i++) {
            out_x[i] = in_x[0] + (in_x[in_len - 1] - in_x[0]) * (float)i / (out_len - 1);
        }

        cubic_spline(in_x, in_y, in_len, out_x, cf, out_len);
        cf_range[0] = in_x[0];
        cf_range[1] = in_x[in_len - 1];
    }
}

void PerceptualToneCurve::initApplyState(PerceptualToneCurveState & state, Glib::ustring workingSpace) const
{

    // Get the curve's contrast value, and convert to a chroma scaling
    const float contrast_value = calculateToneCurveContrastValue();
    state.cmul_contrast = get_curve_val(contrast_value, cf_range, cf, sizeof(cf) / sizeof(cf[0]));
    //fprintf(stderr, "contrast value: %f => chroma scaling %f\n", contrast_value, state.cmul_contrast);
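
To make the contrast → chroma-scaling mapping above easier to follow, here's a minimal standalone sketch. It reuses the handle table from init(), but replaces RawTherapee's cubic_spline sampling and get_curve_val lookup with plain linear interpolation between handles — the function name contrast_to_chroma_scaling is mine, not from rtengine:

```cpp
#include <cmath>
#include <cstddef>

// Handle table from PerceptualToneCurve::init(): contrast value -> chroma scaling.
static const float kHandles[][2] = {
    {0.60f, 0.70f}, {0.70f, 0.80f}, {0.90f, 0.94f}, {0.99f, 1.00f},
    {1.00f, 1.00f}, {1.07f, 1.00f}, {1.08f, 1.00f}, {1.11f, 1.02f},
    {1.20f, 1.08f}, {1.30f, 1.12f}, {1.80f, 1.20f}, {2.00f, 1.22f}
};
static const size_t kHandleCount = sizeof(kHandles) / sizeof(kHandles[0]);

// Piecewise-linear stand-in for the sampled spline lookup:
// clamp outside the table range, interpolate between neighbouring handles inside it.
float contrast_to_chroma_scaling(float contrast)
{
    if (contrast <= kHandles[0][0]) {
        return kHandles[0][1];
    }
    if (contrast >= kHandles[kHandleCount - 1][0]) {
        return kHandles[kHandleCount - 1][1];
    }
    for (size_t i = 1; i < kHandleCount; ++i) {
        if (contrast <= kHandles[i][0]) {
            const float x0 = kHandles[i - 1][0], y0 = kHandles[i - 1][1];
            const float x1 = kHandles[i][0],     y1 = kHandles[i][1];
            const float t = (contrast - x0) / (x1 - x0);
            return y0 + t * (y1 - y0);
        }
    }
    return kHandles[kHandleCount - 1][1];
}
```

So a plain linear curve (contrast value 1.0) gets no chroma scaling, a strong S-curve gets up to +22%, and a negative/low-contrast curve gets its chroma reduced — the spline version just smooths the transitions between these handles.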



And also the Catmull-Rom curve. There's this standalone implementation, and the one from RawTherapee:


/*****************************************************************************
* Catmull Rom Spline
* (https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline)
*****************************************************************************/

namespace {

inline double pow2(double x)
{
    return x*x;
}


inline double catmull_rom_tj(double ti,
                             double xi, double yi,
                             double xj, double yj)
{
    // see https://github.com/Beep6581/RawTherapee/pull/4701#issuecomment-414054187
    static constexpr double alpha = 0.375;
    return pow(sqrt(pow2(xj-xi) + pow2(yj-yi)), alpha) + ti;
}


inline void catmull_rom_spline(int n_points,
                               double p0_x, double p0_y,
                               double p1_x, double p1_y,
                               double p2_x, double p2_y,
                               double p3_x, double p3_y,
                               std::vector<double> &res_x,
                               std::vector<double> &res_y)
{
    res_x.reserve(n_points);
    res_y.reserve(n_points);

    double t0 = 0;
    double t1 = catmull_rom_tj(t0, p0_x, p0_y, p1_x, p1_y);
    double t2 = catmull_rom_tj(t1, p1_x, p1_y, p2_x, p2_y);
    double t3 = catmull_rom_tj(t2, p2_x, p2_y, p3_x, p3_y);

    double space = (t2-t1) / n_points;

    res_x.push_back(p1_x);
    res_y.push_back(p1_y);

    // special case, a segment at 0 or 1 is computed exactly
    if (p1_y == p2_y && (p1_y == 0 || p1_y == 1)) {
        for (int i = 1; i < n_points-1; ++i) {
            double t = p1_x + space * i;
            if (t >= p2_x) {
                break;
            }
            res_x.push_back(t);
            res_y.push_back(p1_y);
        }
    } else {
        for (int i = 1; i < n_points-1; ++i) {
            double t = t1 + space * i;
       
            double c = (t1 - t)/(t1 - t0);
            double d = (t - t0)/(t1 - t0);
            double A1_x = c * p0_x + d * p1_x;
            double A1_y = c * p0_y + d * p1_y;

            c = (t2 - t)/(t2 - t1);
            d = (t - t1)/(t2 - t1);
            double A2_x = c * p1_x + d * p2_x;
            double A2_y = c * p1_y + d * p2_y;

            c = (t3 - t)/(t3 - t2);
            d = (t - t2)/(t3 - t2);
            double A3_x = c * p2_x + d * p3_x;
            double A3_y = c * p2_y + d * p3_y;

            c = (t2 - t)/(t2 - t0);
            d = (t - t0)/(t2 - t0);
            double B1_x = c * A1_x + d * A2_x;
            double B1_y = c * A1_y + d * A2_y;

            c = (t3 - t)/(t3 - t1);
            d = (t - t1)/(t3 - t1);
            double B2_x = c * A2_x + d * A3_x;
            double B2_y = c * A2_y + d * A3_y;       

            c = (t2 - t)/(t2 - t1);
            d = (t - t1)/(t2 - t1);
            double C_x = c * B1_x + d * B2_x;
            double C_y = c * B1_y + d * B2_y;

            res_x.push_back(C_x);
            res_y.push_back(C_y);
        }
    }

    res_x.push_back(p2_x);
    res_y.push_back(p2_y);
}


inline void catmull_rom_reflect(double px, double py, double cx, double cy,
                                double &rx, double &ry)
{
#if 0
    double dx = px - cx;
    double dy = py - cy;
    rx = cx - dx;
    ry = cy - dy;
#else
    // see https://github.com/Beep6581/RawTherapee/pull/4701#issuecomment-414054187
    static constexpr double epsilon = 1e-5;
    double dx = px - cx;
    double dy = py - cy;
    rx = cx - dx * 0.01;
    ry = dx > epsilon ? (dy / dx) * (rx - cx) + cy : cy;
#endif   
}


void catmull_rom_chain(int n_points, int n_cp, double *x, double *y,
                       std::vector<double> &res_x, std::vector<double> &res_y)
{
    double x_first, y_first;
    double x_last, y_last;
    catmull_rom_reflect(x[1], y[1], x[0], y[0], x_first, y_first);
    catmull_rom_reflect(x[n_cp-2], y[n_cp-2], x[n_cp-1], y[n_cp-1], x_last, y_last);

    int segments = n_cp - 1;

    res_x.reserve(n_points);
    res_y.reserve(n_points);

    for (int i = 0; i < segments; ++i) {
        int n = max(int(n_points * (x[i+1] - x[i]) + 0.5), 2);
        catmull_rom_spline(
            n, i == 0 ? x_first : x[i-1], i == 0 ? y_first : y[i-1],
            x[i], y[i], x[i+1], y[i+1],
            i == segments-1 ? x_last : x[i+2],
            i == segments-1 ? y_last : y[i+2],
            res_x, res_y);
    }
}

} // namespace


void DiagonalCurve::catmull_rom_set()
{
    int n_points = max(ppn * 65, 65000);
    poly_x.clear();
    poly_y.clear();
    catmull_rom_chain(n_points, N, x, y, poly_x, poly_y);
}

/*****************************************************************************/
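
As a sanity check on the Barry-Goldman pyramid inside catmull_rom_spline, here's a self-contained single-segment evaluator — my own reduction, not MLV App or RawTherapee code. It uses the same alpha = 0.375 knot spacing and assumes the four control points are pairwise distinct (so no knot interval collapses to zero):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Knot spacing as in catmull_rom_tj (alpha = 0.375, between uniform and centripetal).
static double knot(double ti, const Pt &a, const Pt &b)
{
    const double alpha = 0.375;
    return std::pow(std::sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y)), alpha) + ti;
}

// Evaluate the segment between p1 and p2 at u in [0..1], mirroring the
// A1/A2/A3 -> B1/B2 -> C pyramid from catmull_rom_spline above.
Pt catmull_rom_point(Pt p0, Pt p1, Pt p2, Pt p3, double u)
{
    const double t0 = 0;
    const double t1 = knot(t0, p0, p1);
    const double t2 = knot(t1, p1, p2);
    const double t3 = knot(t2, p2, p3);
    const double t  = t1 + u * (t2 - t1); // map u to the segment's knot range

    auto lerp = [](Pt a, Pt b, double c, double d) {
        return Pt{c * a.x + d * b.x, c * a.y + d * b.y};
    };

    Pt A1 = lerp(p0, p1, (t1 - t) / (t1 - t0), (t - t0) / (t1 - t0));
    Pt A2 = lerp(p1, p2, (t2 - t) / (t2 - t1), (t - t1) / (t2 - t1));
    Pt A3 = lerp(p2, p3, (t3 - t) / (t3 - t2), (t - t2) / (t3 - t2));
    Pt B1 = lerp(A1, A2, (t2 - t) / (t2 - t0), (t - t0) / (t2 - t0));
    Pt B2 = lerp(A2, A3, (t3 - t) / (t3 - t1), (t - t1) / (t3 - t1));
    return lerp(B1, B2, (t2 - t) / (t2 - t1), (t - t1) / (t2 - t1));
}
```

At u = 0 the pyramid collapses to p1 and at u = 1 to p2, which is why catmull_rom_spline can push the exact endpoints and only sample the interior — handy to verify when porting this to MLV App's C code.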


I've also found the L*a*b* adjustments, but they're way too complex to even try to implement in MLV App (they rely on other rtengine code).

masc

Quote from: jpegmasterjesse on September 12, 2019, 12:37:05 AM
Just realized, masc, that I don't actually see those options in windows. In fact, on the latest release the clips just disappear from the sidebar (not deleted in explorer).
Which revision did you compile?
Quote from: jpegmasterjesse on September 12, 2019, 12:37:05 AM
And RE: Tabbing through all the sliders, it is possible and feels great, except there is no way to know which slider you are currently tabbed to.
I agree, but I don't think I can really help here. You could just compile with the system theme instead of the dark theme (you have to comment out 1..2 line(s) for that). If I remember right, native Windows elements show a dotted line around the active element. But the app is really ugly then...  :P
Quote from: jpegmasterjesse on September 12, 2019, 12:37:05 AM
Also, would it be possible to change the increments the values change in? Could Shift+LeftArrow make it move in increments of 10, for example?
This should work using the "Pg Up / Pg Down" keys.


@Luther: What exactly is this? You quoted my post from the discussion about the sharpening/denoising order. How is it related? Looks like it has to do with profiles somehow...
5D3.113 | EOSM.202