Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Luther

Pages: [1] 2 3 4
1
General Help Q&A / Re: Need help finding my next film-like camera
« on: September 17, 2019, 06:06:06 AM »
As far as price/quality goes, EOS M with a 'speedbooster' seems to be the best option. It also has the advantage of being lightweight, so you can use cheap gimbals on it.

2
Even if g3gg0 trusted them a while ago, I don't share the same opinion.

I agree. They require shady JavaScript. Using iframes that link to JS-bloated websites is bad.
I've been using PictShare for some time and it works. It seems to require JS for uploads, but it gives you a direct link afterwards, and that link loads only the file, nothing more (unlike imgur and imgbb).
Of course, if you want something trustworthy you need to self-host.

Another idea for more privacy would be to redirect YouTube links to Invidious.
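Just to illustrate the idea, here's a rough sketch of such a rewrite (the Invidious hostname is a placeholder, use whichever instance you trust):
Code:
#include <iostream>
#include <string>

// Naive host rewrite: swap the YouTube host for an Invidious instance.
// "invidious.example.org" is a placeholder, not a recommendation.
std::string to_invidious(std::string url)
{
    for (const std::string host : { "www.youtube.com", "youtube.com" }) {
        const auto pos = url.find(host);
        if (pos != std::string::npos) {
            url.replace(pos, host.size(), "invidious.example.org");
            break;
        }
    }
    return url;
}

int main()
{
    // prints https://invidious.example.org/watch?v=xxxxxxxxxxx
    std::cout << to_invidious("https://www.youtube.com/watch?v=xxxxxxxxxxx") << "\n";
    return 0;
}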

3
Shoot preparation / Re: Reduce Motion Blur in car-side-window shooting?
« on: September 17, 2019, 05:49:24 AM »
What FPS are you recording at? ML has a feature called "FPS Override", which provides some options regarding the "jerkiness" you talked about.
I personally always record at 24.000 FPS (using "Exact FPS" in the FPS Override menu) with a 1/48 shutter speed. This is called the "180-degree rule", in case you want to research it...
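For reference, the arithmetic behind it is just shutter = 1 / (2 × fps); a tiny sketch, with the numbers only as examples:
Code:
#include <cstdio>

// 180-degree rule: the shutter stays open for half of each frame interval,
// so shutter speed = 1 / (2 * fps).
static double shutter_for_180_degrees(double fps)
{
    return 1.0 / (2.0 * fps);
}

int main()
{
    const double fps = 24.0; // e.g. "Exact FPS" set to 24.000
    std::printf("%.3f fps -> 1/%.0f s shutter\n", fps, 1.0 / shutter_for_180_degrees(fps));
    // prints: 24.000 fps -> 1/48 s shutter
    return 0;
}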

slowing it down a bit (20 fps), adding optical flow and just a little bit of motion blur.

Hurr. This doesn't seem like a good idea to me. Adding frame interpolation will mess up your footage and won't provide any benefit.

4
General Development Discussion / Re: Apertus MLV Format Discussion
« on: September 17, 2019, 05:44:05 AM »
I have zero experience with format design, but here are some ideas. If something sounds too dumb it probably is, so just ignore it:
- Better compression (already being worked on by Fares - thanks!)
- More accurate color information (already described by Ilia)
- Metadata for cinema lenses (Cooke /i seems to be the new "standard" in the industry, even Arri is adopting it IIRC; a rough block-layout sketch follows this list)
- MXF support
- AES256 encryption
- Support for other audio codecs? (FLAC/Opus)
- SDK with permissive license (BSD or ISC) and documentation, so others can easily add support in their software/hardware.
- Embedded darkframe/flat-field (?)
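To make the lens-metadata item a bit more concrete: purely as an illustration (not a spec proposal), such data could live in its own tagged block, following the usual MLV pattern of a 4-byte block type plus a size and timestamp. The block type and every field name below are made up:
Code:
#include <cstdint>

// Hypothetical "LINF" lens-info block, loosely following the MLV convention
// of { blockType[4], blockSize, timestamp } at the start of every block.
// None of these names are part of any existing spec.
#pragma pack(push, 1)
struct mlv_lens_info_block_t
{
    uint8_t  blockType[4];      // e.g. "LINF" (made up)
    uint32_t blockSize;         // total size of this block in bytes
    uint64_t timestamp;         // microseconds since recording start
    char     lensModel[64];     // e.g. "Cooke S7/i 50mm"
    uint16_t focalLength_mm;    // reported focal length
    uint16_t apertureT_x100;    // T-stop * 100, e.g. 205 = T2.05
    uint32_t focusDistance_mm;  // focus distance from the /i data stream
    uint32_t hyperfocal_mm;     // hyperfocal distance, if reported
};
#pragma pack(pop)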

5
@Luther: What exactly is this? You quoted my post from the thread about the sharpening/denoising order. How is it related? Looks like it has to do with profiles somehow...

Ah, sorry @masc. It was about the suggestions I gave in the post you replied to (adding "flexible" and perceptual curves to MLVApp). I think these would be very useful, but I've only seen them in RawTherapee. It would be great if the RT code could be adapted to MLVApp, so I linked part of that code above.

6
General Development Discussion / Re: Apertus MLV Format Discussion
« on: September 12, 2019, 04:34:50 AM »
Apertus has some freedoms ML doesn't have. For example, using Zstandard instead of LJ92 could possibly offer smaller sizes, but would require FPGA programming, which ML can't do. Or expanding the metadata to include more accurate color-related information (spectral data, for example). Or adding Cooke /i metadata for cinema lenses.
So in order to maintain compatibility, Apertus would have to be limited to what Canon cameras can do. I don't think that's a wise step, as Apertus can go much further, since it isn't tied to any one company's hardware...
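For the compression point, a rough sketch of what per-frame Zstandard compression could look like (using the standard libzstd calls; the compression level is just a placeholder that would need tuning for real-time throughput):
Code:
#include <zstd.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Compress one raw frame buffer with Zstandard and return the compressed
// bytes, or an empty vector on failure (caller would fall back to raw).
std::vector<uint8_t> compress_frame(const uint8_t* frame, size_t frameSize)
{
    std::vector<uint8_t> out(ZSTD_compressBound(frameSize));
    const size_t written = ZSTD_compress(out.data(), out.size(),
                                         frame, frameSize, /*level=*/3);
    if (ZSTD_isError(written)) {
        return {};
    }
    out.resize(written);
    return out;
}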

7
Nope. I know that this would be better, but it is not possible with current realization.

Got it.
I've found the perceptual curve implementation in RawTherapee:
Code:
// this is a generic cubic spline implementation, to clean up we could probably use something already existing elsewhere
void PerceptualToneCurve::cubic_spline(const float x[], const float y[], const int len, const float out_x[], float out_y[], const int out_len)
{
    int i, j;

    float **A = (float **)malloc(2 * len * sizeof(*A));
    float *As = (float *)calloc(1, 2 * len * 2 * len * sizeof(*As));
    float *b = (float *)calloc(1, 2 * len * sizeof(*b));
    float *c = (float *)calloc(1, 2 * len * sizeof(*c));
    float *d = (float *)calloc(1, 2 * len * sizeof(*d));

    for (i = 0; i < 2 * len; i++) {
        A[i] = &As[2 * len * i];
    }

    for (i = len - 1; i > 0; i--) {
        b[i] = (y[i] - y[i - 1]) / (x[i] - x[i - 1]);
        d[i - 1] = x[i] - x[i - 1];
    }

    for (i = 1; i < len - 1; i++) {
        A[i][i] = 2 * (d[i - 1] + d[i]);

        if (i > 1) {
            A[i][i - 1] = d[i - 1];
            A[i - 1][i] = d[i - 1];
        }

        A[i][len - 1] = 6 * (b[i + 1] - b[i]);
    }

    for(i = 1; i < len - 2; i++) {
        float v = A[i + 1][i] / A[i][i];

        for(j = 1; j <= len - 1; j++) {
            A[i + 1][j] -= v * A[i][j];
        }
    }

    for(i = len - 2; i > 0; i--) {
        float acc = 0;

        for(j = i; j <= len - 2; j++) {
            acc += A[i][j] * c[j];
        }

        c[i] = (A[i][len - 1] - acc) / A[i][i];
    }

    for (i = 0; i < out_len; i++) {
        float x_out = out_x[i];
        float y_out = 0;

        for (j = 0; j < len - 1; j++) {
            if (x[j] <= x_out && x_out <= x[j + 1]) {
                float v = x_out - x[j];
                y_out = y[j] +
                        ((y[j + 1] - y[j]) / d[j] - (2 * d[j] * c[j] + c[j + 1] * d[j]) / 6) * v +
                        (c[j] * 0.5) * v * v +
                        ((c[j + 1] - c[j]) / (6 * d[j])) * v * v * v;
            }
        }

        out_y[i] = y_out;
    }

    free(A);
    free(As);
    free(b);
    free(c);
    free(d);
}

// generic function for finding minimum of f(x) in the a-b range using the interval halving method
float PerceptualToneCurve::find_minimum_interval_halving(float (*func)(float x, void *arg), void *arg, float a, float b, float tol, int nmax)
{
    float L = b - a;
    float x = (a + b) * 0.5;

    for (int i = 0; i < nmax; i++) {
        float f_x = func(x, arg);

        if ((b - a) * 0.5 < tol) {
            return x;
        }

        float x1 = a + L / 4;
        float f_x1 = func(x1, arg);

        if (f_x1 < f_x) {
            b = x;
            x = x1;
        } else {
            float x2 = b - L / 4;
            float f_x2 = func(x2, arg);

            if (f_x2 < f_x) {
                a = x;
                x = x2;
            } else {
                a = x1;
                b = x2;
            }
        }

        L = b - a;
    }

    return x;
}

struct find_tc_slope_fun_arg {
    const ToneCurve * tc;
};

float PerceptualToneCurve::find_tc_slope_fun(float k, void *arg)
{
    struct find_tc_slope_fun_arg *a = (struct find_tc_slope_fun_arg *)arg;
    float areasum = 0;
    const int steps = 10;

    for (int i = 0; i < steps; i++) {
        float x = 0.1 + ((float)i / (steps - 1)) * 0.5; // testing (sRGB) range [0.1 - 0.6], i.e. ignore highlights and dark shadows
        float y = CurveFactory::gamma2(a->tc->lutToneCurve[CurveFactory::igamma2(x) * 65535] / 65535.0);
        float y1 = k * x;

        if (y1 > 1) {
            y1 = 1;
        }

        areasum += (y - y1) * (y - y1); // square is a rough approx of (twice) the area, but it's fine for our purposes
    }

    return areasum;
}

float PerceptualToneCurve::get_curve_val(float x, float range[2], float lut[], size_t lut_size)
{
    float xm = (x - range[0]) / (range[1] - range[0]) * (lut_size - 1);

    if (xm <= 0) {
        return lut[0];
    }

    int idx = (int)xm;

    if (idx >= static_cast<int>(lut_size) - 1) {
        return lut[lut_size - 1];
    }

    float d = xm - (float)idx; // [0 .. 1]
    return (1.0 - d) * lut[idx] + d * lut[idx + 1];
}

// calculate a single value that represents the contrast of the tone curve
float PerceptualToneCurve::calculateToneCurveContrastValue() const
{

    // find the linear y = k*x that best approximates the curve, which is the linear scaling/exposure component that does not contribute any contrast

    // Note: the analysis is made on the gamma-encoded curve; as the LUT is linear, we apply a backwards gamma first
    struct find_tc_slope_fun_arg arg = { this };
    float k = find_minimum_interval_halving(find_tc_slope_fun, &arg, 0.1, 5.0, 0.01, 20); // normally found in 8 iterations
    //fprintf(stderr, "average slope: %f\n", k);

    float maxslope = 0;
    {
        // look at midtone slope
        const float xd = 0.07;
        const float tx0[] = { 0.30, 0.35, 0.40, 0.45 }; // we only look in the midtone range

        for (size_t i = 0; i < sizeof(tx0) / sizeof(tx0[0]); i++) {
            float x0 = tx0[i] - xd;
            float y0 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x0) * 65535.f] / 65535.f) - k * x0;
            float x1 = tx0[i] + xd;
            float y1 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x1) * 65535.f] / 65535.f) - k * x1;
            float slope = 1.0 + (y1 - y0) / (x1 - x0);

            if (slope > maxslope) {
                maxslope = slope;
            }
        }

        // look at slope at (light) shadows and (dark) highlights
        float e_maxslope = 0;
        {
            const float tx[] = { 0.20, 0.25, 0.50, 0.55 }; // sample points in the (light) shadows and (dark) highlights

            for (size_t i = 0; i < sizeof(tx) / sizeof(tx[0]); i++) {
                float x0 = tx[i] - xd;
                float y0 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x0) * 65535.f] / 65535.f) - k * x0;
                float x1 = tx[i] + xd;
                float y1 = CurveFactory::gamma2(lutToneCurve[CurveFactory::igamma2(x1) * 65535.f] / 65535.f) - k * x1;
                float slope = 1.0 + (y1 - y0) / (x1 - x0);

                if (slope > e_maxslope) {
                    e_maxslope = slope;
                }
            }
        }
        //fprintf(stderr, "%.3f %.3f\n", maxslope, e_maxslope);
        // midtone slope is more important for contrast, but weigh in some slope from brights and darks too.
        maxslope = maxslope * 0.7 + e_maxslope * 0.3;
    }
    return maxslope;
}

void PerceptualToneCurve::BatchApply(const size_t start, const size_t end, float *rc, float *gc, float *bc, const PerceptualToneCurveState &state) const
{
    const AdobeToneCurve& adobeTC = static_cast<const AdobeToneCurve&>((const ToneCurve&) * this);

    for (size_t i = start; i < end; ++i) {
        const bool oog_r = OOG(rc[i]);
        const bool oog_g = OOG(gc[i]);
        const bool oog_b = OOG(bc[i]);

        if (oog_r && oog_g && oog_b) {
            continue;
        }
       
        float r = CLIP(rc[i]);
        float g = CLIP(gc[i]);
        float b = CLIP(bc[i]);

        if (!state.isProphoto) {
            // convert to prophoto space to make sure the same result is had regardless of working color space
            float newr = state.Working2Prophoto[0][0] * r + state.Working2Prophoto[0][1] * g + state.Working2Prophoto[0][2] * b;
            float newg = state.Working2Prophoto[1][0] * r + state.Working2Prophoto[1][1] * g + state.Working2Prophoto[1][2] * b;
            float newb = state.Working2Prophoto[2][0] * r + state.Working2Prophoto[2][1] * g + state.Working2Prophoto[2][2] * b;
            r = newr;
            g = newg;
            b = newb;
        }

        float ar = r;
        float ag = g;
        float ab = b;
        adobeTC.Apply(ar, ag, ab);

        if (ar >= 65535.f && ag >= 65535.f && ab >= 65535.f) {
            // clip fast path, will also avoid strange colours of clipped highlights
            //rc[i] = gc[i] = bc[i] = 65535.f;
            if (!oog_r) rc[i] = 65535.f;
            if (!oog_g) gc[i] = 65535.f;
            if (!oog_b) bc[i] = 65535.f;
            continue;
        }

        if (ar <= 0.f && ag <= 0.f && ab <= 0.f) {
            //rc[i] = gc[i] = bc[i] = 0;
            if (!oog_r) rc[i] = 0.f;
            if (!oog_g) gc[i] = 0.f;
            if (!oog_b) bc[i] = 0.f;
            continue;
        }

        // ProPhoto constants for luminance, that is xyz_prophoto[1][]
        constexpr float Yr = 0.2880402f;
        constexpr float Yg = 0.7118741f;
        constexpr float Yb = 0.0000857f;

        // we use the Adobe (RGB-HSV hue-stabilized) curve to decide luminance, which generally leads to a less contrasty result
        // compared to a pure luminance curve. We do this to be more compatible with the most popular curves.
        const float oldLuminance = r * Yr + g * Yg + b * Yb;
        const float newLuminance = ar * Yr + ag * Yg + ab * Yb;
        const float Lcoef = newLuminance / oldLuminance;
        r = LIM<float>(r * Lcoef, 0.f, 65535.f);
        g = LIM<float>(g * Lcoef, 0.f, 65535.f);
        b = LIM<float>(b * Lcoef, 0.f, 65535.f);

        // move to JCh so we can modulate chroma based on the global contrast-related chroma scaling factor
        float x, y, z;
        Color::Prophotoxyz(r, g, b, x, y, z);

        float J, C, h;
        Ciecam02::xyz2jch_ciecam02float( J, C, h,
                                         aw, fl,
                                         x * 0.0015259022f,  y * 0.0015259022f,  z * 0.0015259022f,
                                         xw, yw,  zw,
                                         c,  nc, pow1, nbb, ncb, cz, d);


        if (!isfinite(J) || !isfinite(C) || !isfinite(h)) {
            // this can happen for dark noise colours or colours outside human gamut. Then we just return the curve's result.
            if (!state.isProphoto) {
                float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
                float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
                float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
                r = newr;
                g = newg;
                b = newb;
            }
            if (!oog_r) rc[i] = r;
            if (!oog_g) gc[i] = g;
            if (!oog_b) bc[i] = b;

            continue;
        }

        float cmul = state.cmul_contrast; // chroma scaling factor

        // depending on color, the chroma scaling factor can be fine-tuned below

        {
            // decrease chroma scaling slightly of extremely saturated colors
            float saturated_scale_factor = 0.95f;
            constexpr float lolim = 35.f; // lower limit, below this chroma all colors will keep original chroma scaling factor
            constexpr float hilim = 60.f; // high limit, above this chroma the chroma scaling factor is multiplied with the saturated scale factor value above

            if (C < lolim) {
                // chroma is low enough, don't scale
                saturated_scale_factor = 1.f;
            } else if (C < hilim) {
                // S-curve transition between low and high limit
                float cx = (C - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1.f - cx);
                }

                saturated_scale_factor = (1.f - cx) + saturated_scale_factor * cx;
            } else {
                // do nothing, high saturation color, keep scale factor
            }

            cmul *= saturated_scale_factor;
        }

        {
            // increase chroma scaling slightly of shadows
            float nL = Color::gamma2curve[newLuminance]; // apply gamma so we make comparison and transition with a more perceptual lightness scale
            float dark_scale_factor = 1.20f;
            //float dark_scale_factor = 1.0 + state.debug.p2 / 100.0f;
            constexpr float lolim = 0.15f;
            constexpr float hilim = 0.50f;

            if (nL < lolim) {
                // do nothing, keep scale factor
            } else if (nL < hilim) {
                // S-curve transition
                float cx = (nL - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                dark_scale_factor = dark_scale_factor * (1.0f - cx) + cx;
            } else {
                dark_scale_factor = 1.f;
            }

            cmul *= dark_scale_factor;
        }

        {
            // to avoid strange CIECAM02 chroma errors on close-to-shadow-clipping colors we reduce chroma scaling towards 1.0 for black colors
            float dark_scale_factor = 1.f / cmul;
            constexpr float lolim = 4.f;
            constexpr float hilim = 7.f;

            if (J < lolim) {
                // do nothing, keep scale factor
            } else if (J < hilim) {
                // S-curve transition
                float cx = (J - lolim) / (hilim - lolim);

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                dark_scale_factor = dark_scale_factor * (1.f - cx) + cx;
            } else {
                dark_scale_factor = 1.f;
            }

            cmul *= dark_scale_factor;
        }

        C *= cmul;

        Ciecam02::jch2xyz_ciecam02float( x, y, z,
                                         J, C, h,
                                         xw, yw,  zw,
                                         c, nc, pow1, nbb, ncb, fl, cz, d, aw );

        if (!isfinite(x) || !isfinite(y) || !isfinite(z)) {
            // can happen for colours on the rim of being outside gamut, that worked without chroma scaling but not with. Then we return only the curve's result.
            if (!state.isProphoto) {
                float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
                float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
                float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
                r = newr;
                g = newg;
                b = newb;
            }

            if (!oog_r) rc[i] = r;
            if (!oog_g) gc[i] = g;
            if (!oog_b) bc[i] = b;

            continue;
        }

        Color::xyz2Prophoto(x, y, z, r, g, b);
        r *= 655.35f;
        g *= 655.35f;
        b *= 655.35f;
        r = LIM<float>(r, 0.f, 65535.f);
        g = LIM<float>(g, 0.f, 65535.f);
        b = LIM<float>(b, 0.f, 65535.f);

        {
            // limit saturation increase in rgb space to avoid severe clipping and flattening in extreme highlights

            // we use the RGB-HSV hue-stable "Adobe" curve as reference. For S-curve contrast it increases
            // saturation greatly, but desaturates extreme highlights and thus provide a smooth transition to
            // the white point. However the desaturation effect is quite strong so we make a weighting
            const float as = Color::rgb2s(ar, ag, ab);
            const float s = Color::rgb2s(r, g, b);

            const float sat_scale = as <= 0.f ? 1.f : s / as; // saturation scale compared to Adobe curve
            float keep = 0.2f;
            constexpr float lolim = 1.00f; // only mix in the Adobe curve if we have increased saturation compared to it
            constexpr float hilim = 1.20f;

            if (sat_scale < lolim) {
                // saturation is low enough, don't desaturate
                keep = 1.f;
            } else if (sat_scale < hilim) {
                // S-curve transition
                float cx = (sat_scale - lolim) / (hilim - lolim); // x = [0..1], 0 at lolim, 1 at hilim

                if (cx < 0.5f) {
                    cx = 2.f * SQR(cx);
                } else {
                    cx = 1.f - 2.f * SQR(1 - cx);
                }

                keep = (1.f - cx) + keep * cx;
            } else {
                // do nothing, very high increase, keep minimum amount
            }

            if (keep < 1.f) {
                // mix in some of the Adobe curve result
                r = intp(keep, r, ar);
                g = intp(keep, g, ag);
                b = intp(keep, b, ab);
            }
        }

        if (!state.isProphoto) {
            float newr = state.Prophoto2Working[0][0] * r + state.Prophoto2Working[0][1] * g + state.Prophoto2Working[0][2] * b;
            float newg = state.Prophoto2Working[1][0] * r + state.Prophoto2Working[1][1] * g + state.Prophoto2Working[1][2] * b;
            float newb = state.Prophoto2Working[2][0] * r + state.Prophoto2Working[2][1] * g + state.Prophoto2Working[2][2] * b;
            r = newr;
            g = newg;
            b = newb;
        }
        if (!oog_r) rc[i] = r;
        if (!oog_g) gc[i] = g;
        if (!oog_b) bc[i] = b;
    }
}
float PerceptualToneCurve::cf_range[2];
float PerceptualToneCurve::cf[1000];
float PerceptualToneCurve::f, PerceptualToneCurve::c, PerceptualToneCurve::nc, PerceptualToneCurve::yb, PerceptualToneCurve::la, PerceptualToneCurve::xw, PerceptualToneCurve::yw, PerceptualToneCurve::zw;
float PerceptualToneCurve::n, PerceptualToneCurve::d, PerceptualToneCurve::nbb, PerceptualToneCurve::ncb, PerceptualToneCurve::cz, PerceptualToneCurve::aw, PerceptualToneCurve::wh, PerceptualToneCurve::pfl, PerceptualToneCurve::fl, PerceptualToneCurve::pow1;

void PerceptualToneCurve::init()
{

    // init ciecam02 state, used for chroma scalings
    xw = 96.42f;
    yw = 100.0f;
    zw = 82.49f;
    yb = 20;
    la = 20;
    f  = 1.00f;
    c  = 0.69f;
    nc = 1.00f;

    Ciecam02::initcam1float(yb, 1.f, f, la, xw, yw, zw, n, d, nbb, ncb,
                            cz, aw, wh, pfl, fl, c);
    pow1 = pow_F( 1.64f - pow_F( 0.29f, n ), 0.73f );

    {
        // init contrast-value-to-chroma-scaling conversion curve

        // contrast value in the left column, chroma scaling in the right. Handles for a spline.
        // Put the columns in a file (without commas) and you can plot the spline with gnuplot: "plot 'curve.txt' smooth csplines"
        // A spline can easily get overshoot issues so if you fine-tune the values here make sure that the resulting spline is smooth afterwards, by
        // plotting it for example.
        const float p[] = {
            0.60, 0.70, // lowest contrast
            0.70, 0.80,
            0.90, 0.94,
            0.99, 1.00,
            1.00, 1.00, // 1.0 (linear curve) to 1.0, no scaling
            1.07, 1.00,
            1.08, 1.00,
            1.11, 1.02,
            1.20, 1.08,
            1.30, 1.12,
            1.80, 1.20,
            2.00, 1.22  // highest contrast
        };

        const size_t in_len = sizeof(p) / sizeof(p[0]) / 2;
        float in_x[in_len];
        float in_y[in_len];

        for (size_t i = 0; i < in_len; i++) {
            in_x[i] = p[2 * i + 0];
            in_y[i] = p[2 * i + 1];
        }

        const size_t out_len = sizeof(cf) / sizeof(cf[0]);
        float out_x[out_len];

        for (size_t i = 0; i < out_len; i++) {
            out_x[i] = in_x[0] + (in_x[in_len - 1] - in_x[0]) * (float)i / (out_len - 1);
        }

        cubic_spline(in_x, in_y, in_len, out_x, cf, out_len);
        cf_range[0] = in_x[0];
        cf_range[1] = in_x[in_len - 1];
    }
}

void PerceptualToneCurve::initApplyState(PerceptualToneCurveState & state, Glib::ustring workingSpace) const
{

    // Get the curve's contrast value, and convert to a chroma scaling
    const float contrast_value = calculateToneCurveContrastValue();
    state.cmul_contrast = get_curve_val(contrast_value, cf_range, cf, sizeof(cf) / sizeof(cf[0]));
    //fprintf(stderr, "contrast value: %f => chroma scaling %f\n", contrast_value, state.cmul_contrast);

    // (the rest of initApplyState was cut off in the original quote)
}

And also the Catmull-Rom curve. There's a standalone implementation, plus this one from RawTherapee:

Code:
/*****************************************************************************
 * Catmull Rom Spline
 * (https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline)
 *****************************************************************************/

namespace {

inline double pow2(double x)
{
    return x*x;
}


inline double catmull_rom_tj(double ti,
                             double xi, double yi,
                             double xj, double yj)
{
    // see https://github.com/Beep6581/RawTherapee/pull/4701#issuecomment-414054187
    static constexpr double alpha = 0.375;
    return pow(sqrt(pow2(xj-xi) + pow2(yj-yi)), alpha) + ti;
}


inline void catmull_rom_spline(int n_points,
                               double p0_x, double p0_y,
                               double p1_x, double p1_y,
                               double p2_x, double p2_y,
                               double p3_x, double p3_y,
                               std::vector<double> &res_x,
                               std::vector<double> &res_y)
{
    res_x.reserve(n_points);
    res_y.reserve(n_points);

    double t0 = 0;
    double t1 = catmull_rom_tj(t0, p0_x, p0_y, p1_x, p1_y);
    double t2 = catmull_rom_tj(t1, p1_x, p1_y, p2_x, p2_y);
    double t3 = catmull_rom_tj(t2, p2_x, p2_y, p3_x, p3_y);

    double space = (t2-t1) / n_points;

    res_x.push_back(p1_x);
    res_y.push_back(p1_y);

    // special case, a segment at 0 or 1 is computed exactly
    if (p1_y == p2_y && (p1_y == 0 || p1_y == 1)) {
        for (int i = 1; i < n_points-1; ++i) {
            double t = p1_x + space * i;
            if (t >= p2_x) {
                break;
            }
            res_x.push_back(t);
            res_y.push_back(p1_y);
        }
    } else {
        for (int i = 1; i < n_points-1; ++i) {
            double t = t1 + space * i;
       
            double c = (t1 - t)/(t1 - t0);
            double d = (t - t0)/(t1 - t0);
            double A1_x = c * p0_x + d * p1_x;
            double A1_y = c * p0_y + d * p1_y;

            c = (t2 - t)/(t2 - t1);
            d = (t - t1)/(t2 - t1);
            double A2_x = c * p1_x + d * p2_x;
            double A2_y = c * p1_y + d * p2_y;

            c = (t3 - t)/(t3 - t2);
            d = (t - t2)/(t3 - t2);
            double A3_x = c * p2_x + d * p3_x;
            double A3_y = c * p2_y + d * p3_y;

            c = (t2 - t)/(t2 - t0);
            d = (t - t0)/(t2 - t0);
            double B1_x = c * A1_x + d * A2_x;
            double B1_y = c * A1_y + d * A2_y;

            c = (t3 - t)/(t3 - t1);
            d = (t - t1)/(t3 - t1);
            double B2_x = c * A2_x + d * A3_x;
            double B2_y = c * A2_y + d * A3_y;       

            c = (t2 - t)/(t2 - t1);
            d = (t - t1)/(t2 - t1);
            double C_x = c * B1_x + d * B2_x;
            double C_y = c * B1_y + d * B2_y;

            res_x.push_back(C_x);
            res_y.push_back(C_y);
        }
    }

    res_x.push_back(p2_x);
    res_y.push_back(p2_y);
}


inline void catmull_rom_reflect(double px, double py, double cx, double cy,
                                double &rx, double &ry)
{
#if 0
    double dx = px - cx;
    double dy = py - cy;
    rx = cx - dx;
    ry = cy - dy;
#else
    // see https://github.com/Beep6581/RawTherapee/pull/4701#issuecomment-414054187
    static constexpr double epsilon = 1e-5;
    double dx = px - cx;
    double dy = py - cy;
    rx = cx - dx * 0.01;
    ry = dx > epsilon ? (dy / dx) * (rx - cx) + cy : cy;
#endif   
}


void catmull_rom_chain(int n_points, int n_cp, double *x, double *y,
                       std::vector<double> &res_x, std::vector<double> &res_y)
{
    double x_first, y_first;
    double x_last, y_last;
    catmull_rom_reflect(x[1], y[1], x[0], y[0], x_first, y_first);
    catmull_rom_reflect(x[n_cp-2], y[n_cp-2], x[n_cp-1], y[n_cp-1], x_last, y_last);

    int segments = n_cp - 1;

    res_x.reserve(n_points);
    res_y.reserve(n_points);

    for (int i = 0; i < segments; ++i) {
        int n = max(int(n_points * (x[i+1] - x[i]) + 0.5), 2);
        catmull_rom_spline(
            n, i == 0 ? x_first : x[i-1], i == 0 ? y_first : y[i-1],
            x[i], y[i], x[i+1], y[i+1],
            i == segments-1 ? x_last : x[i+2],
            i == segments-1 ? y_last : y[i+2],
            res_x, res_y);
    }
}

} // namespace


void DiagonalCurve::catmull_rom_set()
{
    int n_points = max(ppn * 65, 65000);
    poly_x.clear();
    poly_y.clear();
    catmull_rom_chain(n_points, N, x, y, poly_x, poly_y);
}

/*****************************************************************************/

I've also found the L*a*b* adjustments, but they're way too complex to even try to implement in MLVApp (they rely on some rtengine code).

8
also @Luther am I right in thinking you want the same as me? A wide gamut internally so that colours do not clip?
You're exactly right. I agree with everything you said.
Ideally, the preview should be Rec.709 or Rec.2020 (with some box to choose between the two). The raw data is initially converted to the right gamut (such as ACES AP0 or AP1) and then the processing (curves, saturation, LUTs) happens between the two conversions (ACES AP1 > curves > Rec.2020, for example).
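Something like this, in very rough pseudo-pipeline form (the matrices below are identity placeholders, not the real camera-to-AP1 or AP1-to-Rec.2020 coefficients):
Code:
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Apply a 3x3 color matrix to one RGB pixel in place.
inline void apply_matrix(const Mat3& m, float& r, float& g, float& b)
{
    const float nr = m[0][0] * r + m[0][1] * g + m[0][2] * b;
    const float ng = m[1][0] * r + m[1][1] * g + m[1][2] * b;
    const float nb = m[2][0] * r + m[2][1] * g + m[2][2] * b;
    r = nr; g = ng; b = nb;
}

// Identity placeholders -- the real camera->AP1 and AP1->Rec.2020 matrices
// would be substituted here.
const Mat3 camera_to_ap1  = {{ {1, 0, 0}, {0, 1, 0}, {0, 0, 1} }};
const Mat3 ap1_to_rec2020 = {{ {1, 0, 0}, {0, 1, 0}, {0, 0, 1} }};

void process_pixel(float& r, float& g, float& b)
{
    apply_matrix(camera_to_ap1, r, g, b);   // into the wide working gamut
    // ... curves, saturation, LUTs operate here, in AP1 ...
    apply_matrix(ap1_to_rec2020, r, g, b);  // out to the chosen preview gamut
}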

Quote
but I'll add that later, in a way that keeps the code clean..., more work than it seems.
We know. I find it frustrating how little information there is about these color conversions too.

Quote
Maybe I should adapt it to D65 as all others.
An easier solution would be to just decrease the WB slider :P
Not elegant, but might work.

Quote
The only true solution to this RGB crap is spectral colour processing (probably not coming to MLV App until a full rewrite).
Or try to port OpenColorIO. I have no idea how that would work, though.
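From what I understand of the OCIO v2 API (take this with a grain of salt; the color space names depend entirely on which config is loaded), applying a conversion to an image would look roughly like this:
Code:
#include <OpenColorIO/OpenColorIO.h>
namespace OCIO = OCIO_NAMESPACE;

// Convert a packed float RGB image between two color spaces defined by the
// active OCIO config. The space names are examples from the ACES config and
// may differ in other configs.
void apply_ocio(float* pixels, long width, long height)
{
    OCIO::ConstConfigRcPtr config = OCIO::GetCurrentConfig();
    OCIO::ConstProcessorRcPtr processor =
        config->getProcessor("ACES - ACEScg", "Output - Rec.709");
    OCIO::ConstCPUProcessorRcPtr cpu = processor->getDefaultCPUProcessor();

    OCIO::PackedImageDesc img(pixels, width, height, 3 /* RGB channels */);
    cpu->apply(img);
}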

9
And is it even an "algorithm" if no one wrote the actual method?  :)

Good point. I guess they are comparing with normal interpolation methods, such as Lanczos or Spline64.
The hybrid approach seems exciting:
https://people.xiph.org/~jm/demo/rnnoise/

10
Nice testing. How are you verifying/comparing correct gamuts?

Not very scientific, Danne. I'm using an MLV clip of a Pantone chart (taken with a 50D). My display covers 99% of sRGB, but I'm not sure it's accurate. Here's the Pantone footage in case you want to test it too:
https://we.tl/t-gA4ixpCkHx

11
So, I compiled the master branch to test the gamuts. Some notes I took while testing:
- Sony S has warm tones desaturated and shifted towards yellow
- AdobeRGB gets visible chromatic noise with saturated blues
- XYZ is completely off. Now that other gamuts are implemented, is XYZ still relevant?
- ProPhoto has separation issues in the green-blue part of the spectrum. It shifts the WB towards blue. Maybe the matrix is D55 instead of D65?
- Alexa WG seems good
- ACES seems good too, but has a WB shift towards yellow. It has the smoothest chromatic separation, but is very desaturated.

While testing the gamuts, I also noticed other stuff.
Is denoising applied before or after sharpening? Denoising should always come before sharpening, but I'm not sure MLVApp does that - does it?
Also, some ideas from RawTherapee that are missing from MLVApp:
- "Flexible" curves (centripetal Catmull–Rom splines) are really useful:
http://rawpedia.rawtherapee.com/Exposure#Flexible
- Perceptual curve:
http://rawpedia.rawtherapee.com/Exposure#Perceptual
- Lab adjustments. For example, the CC Curve cannot be replicated with MLVApp's HSL:
http://rawpedia.rawtherapee.com/Lab_Adjustments#CC_Curve


Overall, the new gamut handling gives much better colors. You can clearly see the results in saturated tones. Thanks for all the work @Ilia3101! And @masc too for the interface adjustments :)


12
It has a gamma curve, yes. Some minutes ago the branch was merged to master, where you can adjust gamma as you like. For now: to be compiled.

Hoah! I'll compile this week and do some tests!

13
This is great @masc! It's not shaky at all. I can't see any aliasing in this, which is great since one of the issues with the EOS M for me was aliasing.
The only issue I see is that it's very soft and has chromatic aberration because of the wide-open aperture. But it works well as an artistic choice in the clip.
I'd like to see how it performs with a very sharp lens when recording straight lines (buildings, etc.). Did you use @Danne's script to smooth aliasing?
About AVIR, from its GitHub page:
Quote
"The author even believes this algorithm provides the "ultimate" level of quality (for an orthogonal resizing) which cannot be increased further"

Humm... dunno about that. There are some pretty amazing super-resolution algorithms, like ESRGAN, and they keep improving every year. I don't think there's an "ultimate" solution to the interpolation problem.
But, yeah, AVIR seems great.

14
Share Your Videos / Re: industrial music video
« on: September 07, 2019, 02:56:48 PM »
Damn, that's some nice music. I like how you used metal objects to make some of the sounds. The synth is great too. Have you ever listened to Perturbator? I think he'd be a great inspiration for you (even though recommending his work is a cliché by now, his album "New Model" is just too awesome not to recommend).
On the video itself, the YouTube compression always messes everything up. But it was nice. I just think some neon lights (or electroluminescent wire) would fit the "mood" better, kinda like those scenes from Dark City (1998) or Nirvana (1997) :)

edit: just one more thing. Grain. For some reason I think the industrial style fits well with film grain.

15
don't look half as good when exported as DNG and opened in Lightroom

DNG is a raw format; none of the adjustments you make in MLV App are baked into the exported DNGs, so Lightroom only shows the unprocessed raw data.

16
General Chat / Re: Another 645 example, with the help of ML
« on: September 02, 2019, 08:42:37 AM »
With a Dual-ISO approach I get a lift in the shadows, and keep the number of brackets to the minimum, i.e. 8.

But you will also get more aliasing and more noise. With HDRMerge you can have 32-bit DNGs with minimal noise and full dynamic range. Grade those in, say, RawTherapee and then construct the pano...
That's what I do, at least.

17
General Chat / Re: Another 645 example, with the help of ML
« on: September 02, 2019, 02:12:57 AM »
Cool! I had a Mamiya medium-format 80mm f/2.8 lens too (adapted to EOS). It was very sharp stopped down to f/8.0, but not bright enough for me (I've got a Nikkor f/1.2 now).
You could have used HDRMerge instead of Dual-ISO ;)

18
Camera-specific discussion / Re: Canon EOS M
« on: August 30, 2019, 03:06:30 PM »
That's some nice work, Danne and Jip-Hop. Thanks. Can't wait to get an EOS M to play with.

19
Really gentle mood. I like it.
The only issue I see is the aliasing. But there's not much we can do about it.

20
Does any colour expert know if the Adobe matrices we all use in open source software are actually neutral?

Not sure, Ilia. @Andy600 has previously linked to this research about camera spectral data. And here are the actual matrices. Maybe if you put those into MLVApp and compare them with the Adobe matrices you can see how close they are?
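Even a crude numerical comparison would show how far apart they are; something like this (the two matrices here are placeholders, not the real Adobe or measured values):
Code:
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Largest absolute element-wise difference between two 3x3 camera matrices.
double max_abs_diff(const Mat3& a, const Mat3& b)
{
    double m = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            m = std::max(m, std::fabs(a[i][j] - b[i][j]));
    return m;
}

int main()
{
    // Placeholder values -- substitute the Adobe matrix and the matrix
    // derived from the measured spectral data for the same camera.
    const Mat3 adobe    = {{ {0.80, 0.10, 0.10}, {0.10, 0.80, 0.10}, {0.10, 0.10, 0.80} }};
    const Mat3 measured = {{ {0.78, 0.12, 0.10}, {0.09, 0.82, 0.09}, {0.11, 0.08, 0.81} }};
    std::printf("max |delta| = %.4f\n", max_abs_diff(adobe, measured));
    return 0;
}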

21
Raw Video Postprocessing / Re: dng color grading
« on: August 26, 2019, 02:18:26 AM »
Hihi, does anyone have any advice on the best workflow for color grading .dng files in DaVinci Resolve? Thank you!

If you have the MLV files, try MLVApp instead. If not, try to use ACEScct inside Resolve...

22
@Luther Nice to hear from you again! This attempt is going much better, way more simple. Should be able to do a release quite soon.

Also thanks again for all the links you have sent since whenever you started. I looked back at that RawToAces github link you sent ages ago with the camera spectral data, and it is literal gold, what can be done with it is amazing. I really hope I can find such data for more cameras. In MLV App 2.0 (or whatever), we'll definitely have super accurate spectral colour correction ;)

You're welcome! I think the gamut feature will be great for MLVApp. Thanks for all the work on this awesome software!

23
I hadn't considered TPM (TEE / PSP on Arm?)

TrustZone could work too.

Quote
and don't know if Canon ship the right HW feature.

Not in current cameras. I meant the next generation of cameras.

Quote
they could simply use asymmetric crypto instead of AES.

Do you mean homomorphic? This would imply adding wifi to every camera, including the cheaper ones...

Quote
Regardless, any change in this area would be months of work for a dev team

I don't know about that. TPM is not that complicated. There are companies with know-how in this area, and Canon could simply contract the work out...

Quote
bricks customer cameras.

A firmware update wouldn't be any more difficult or harmful. They could keep a database of the TPM/TrustZone keys on their website and make it accessible only to authorized maintainers. Employees could simply look up the key based on the camera serial number.

I don't know if any of this will actually happen, but it is a possibility. And if it does happen, it would mean the end of Magic Lantern... until someone discovers a vulnerability in the TPM module (which has already happened before).

24
Nice to see you working on gamuts again, @Ilia3101! :)

25
This might be an easier option?

https://sourcehut.org/blog/2019-08-21-sourcehut-welcomes-bitbucket-refugees/

@chris_overseas is right. Sourcehut is awesome. You don't need to pay, and you can self-host too. If git is too complicated, see also Pijul and Game of Trees.
