Reading 14 bit RAW

Started by ilia3101, March 24, 2017, 09:27:09 PM


ilia3101

I am an absolute code noob and right now I'm trying to decode 14-bit RAW video in C. I have looked at Stack Overflow, and it seems like I'd need to read 7 bytes (56 bits) to get 4 x 14-bit numbers, and that's what I'm doing, but it's not working and I can't see why not. Isn't a Magic Lantern RAW frame a tightly packed sequence of 14-bit values, or are there gaps of some kind?
I'm taking 7 bytes, and shifting them like this:
for (uint32_t RAWbyte = 0; RAWbyte < RAWFrameSize; RAWbyte += 7)
{
    image16bit[(RAWbyte/7) * 4]     = ( (RAWFrame[RAWbyte] << 6) | (RAWFrame[RAWbyte + 1] >> 2) );
    image16bit[(RAWbyte/7) * 4 + 1] = ( (RAWFrame[RAWbyte + 1] << 12) | (RAWFrame[RAWbyte + 2] << 4) | (RAWFrame[RAWbyte + 3] >> 4) );
    image16bit[(RAWbyte/7) * 4 + 2] = ( (RAWFrame[RAWbyte + 3] << 10) | (RAWFrame[RAWbyte + 4] << 2) | (RAWFrame[RAWbyte + 5] >> 6) );
    image16bit[(RAWbyte/7) * 4 + 3] = ( (RAWFrame[RAWbyte + 5] << 8) | RAWFrame[RAWbyte + 6] );
}

easier to read:

pixel0 = ( (byte0 << 6) | (byte1 >> 2) );
pixel1 = ( (byte1 << 12) | (byte2 << 4) | (byte3 >> 4) );
pixel2 = ( (byte3 << 10) | (byte4 << 2) | (byte5 >> 6) );
pixel3 = ( (byte5 << 8) | byte6 );

Is there an error in the bit math or is the RAW format more complex?
Here is the output btw:

Close up:

I tried looking at the mlv_dump source but it's way too confusing for me.
Thanks to anyone willing to explain :)

a1ex

mlv_dump handles all bit depths with the same routine; you may find raw2dng or raw_get_pixel from raw.c (which are hardcoded for 14-bit) easier to understand. Make sure you look at MLVFS as well.

struct raw_pixblock from raw.h may be useful, too. However, bitfield packing order is not well-defined in the C standard, so... it just happens to work with gcc (and might break with other compilers) [1] [2].

So, to find which shifts to use, the easiest way is probably to look at the disassembly generated by the compiler.

Tip: in ML, there's a shortcut to look at disassembly, either from ML core or from a module: tag any non-static function with DUMP_ASM, then run "make dump_asm".
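For reference, the layout that struct raw_pixblock describes (under gcc's little-endian bitfield layout) can also be expressed portably: 14-bit values packed MSB-first across little-endian 16-bit words. Below is a sketch under that assumption; get_pixel14 and put_pixel14 are illustrative names, not ML functions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers (not ML functions): treat the frame as 16-bit
   little-endian words and pack/unpack 14-bit values MSB-first across
   them. This should match struct raw_pixblock under gcc's
   little-endian bitfield layout. */
uint16_t get_pixel14(const uint16_t *words, size_t i)
{
    size_t bit = i * 14;
    size_t w   = bit / 16;
    int ofs    = (int)(bit % 16);   /* offset from the word's MSB */
    /* note: always reads one word past w; keep a spare word at the end */
    uint32_t v = ((uint32_t)words[w] << 16) | words[w + 1];
    return (uint16_t)((v >> (32 - 14 - ofs)) & 0x3FFF);
}

/* inverse operation, handy for round-trip testing */
void put_pixel14(uint16_t *words, size_t i, uint16_t val)
{
    size_t bit = i * 14;
    size_t w   = bit / 16;
    int ofs    = (int)(bit % 16);
    uint32_t mask = 0x3FFFu << (32 - 14 - ofs);
    uint32_t v = ((uint32_t)words[w] << 16) | words[w + 1];
    v = (v & ~mask) | ((uint32_t)val << (32 - 14 - ofs));
    words[w]     = (uint16_t)(v >> 16);
    words[w + 1] = (uint16_t)(v & 0xFFFF);
}
```

A round trip through put_pixel14/get_pixel14 checks the indexing, though only a real frame confirms that the layout matches what the camera writes.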

ilia3101

Thank you! I spent a while being confused by the things you suggested; raw_pixblock was the most confusing at first, but it told me everything I needed. I didn't realise the pixels are ordered in such a pattern, thanks a lot. Do 10 and 12 bit have their own pixblock structs, and/or do they follow the same mirrored kind of pattern?

a1ex

Most likely yes, but didn't try. Here's what I've used for Apertus:


/* two pixels packed as 12-bit (Apertus raw12) */
struct raw12_twopix
{
    unsigned a_hi: 8;
    unsigned b_hi: 4;
    unsigned a_lo: 4;
    unsigned b_lo: 8;
} __attribute__((packed));
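The same unpacking can be written with plain shifts instead of bitfields. This assumes a = (a_hi << 4) | a_lo and b = (b_hi << 8) | b_lo, with the bytes laid out the way gcc packs the bitfields above on a little-endian machine; per the portability caveat earlier, that equivalence is an assumption.

```c
#include <assert.h>
#include <stdint.h>

/* Unpack two 12-bit pixels from 3 bytes with plain shifts, assuming
   the gcc little-endian layout of raw12_twopix:
   byte0 = a_hi, byte1 = (a_lo << 4) | b_hi, byte2 = b_lo */
void raw12_unpack(const uint8_t buf[3], uint16_t *a, uint16_t *b)
{
    *a = (uint16_t)(((uint16_t)buf[0] << 4) | (buf[1] >> 4));
    *b = (uint16_t)(((uint16_t)(buf[1] & 0x0F) << 8) | buf[2]);
}
```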


Note the bytes in uncompressed Canon raw must be swapped to get valid DNG, at 10, 12 and 14-bit (see reverse_bytes_order in chdk-dng.c), but not at 16.

DNG spec:
Quote
If BitsPerSample is not equal to 8 or 16 or 32, then the bits must be packed into bytes using the TIFF default FillOrder of 1 (big-endian), even if the TIFF file itself uses little-endian byte order.

The 16-bit data provided by Canon (which is really 14-bit padded with zeros) is already little endian.
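The byte swap mentioned above can be sketched as follows (a simplified restatement of what reverse_bytes_order in chdk-dng.c does; the real function may differ in detail):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Exchange the two bytes of every 16-bit word, turning Canon's
   little-endian packing into the big-endian (FillOrder 1) packing
   that the DNG spec requires at 10/12/14 bits per sample. */
void swap_bytes16(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i += 2)
    {
        uint8_t tmp = buf[i];
        buf[i]      = buf[i + 1];
        buf[i + 1]  = tmp;
    }
}
```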

ilia3101

I see, finally got the swapped bytes all working:
(same shot)

Although my code has ended up pretty much the same as raw_get_pixel, at least I understand how the ordering works now.
Sorry to be bothering you with tonnes of questions, but where could I find information about how to read the MLV headers? I've looked at mlv.h but have no idea how to identify which block it's reading; I'm guessing uint8_t blockType[4]; would contain this info, but what values does blockType have for each block?

bouncyball

Here is all information you need along with mlv.h

ilia3101

That's exactly what I needed, thanks a lot! Surprised I never found it :-[

ilia3101

Hello, I have another noob question about RAW video/processing.

Still working on the same RAW processing code; now reading the headers properly, and the image already looks pretty good with AMaZE debayering:

(little pink highlight bug to fix)

Next step is colour, how do I use the matrices:
    #define CAM_COLORMATRIX1                       \
     4716, 10000,      603, 10000,    -830, 10000, \
    -7798, 10000,    15474, 10000,    2480, 10000, \
    -1496, 10000,     1937, 10000,    6651, 10000
?
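(For reference, each value in the macro is a rational: a numerator followed by a denominator, nine pairs forming a 3x3 row-major matrix, mirroring how DNG stores ColorMatrix tags. A hedged sketch of decoding it; decode_matrix is a made-up helper name:)

```c
#include <assert.h>

/* the nine (numerator, denominator) pairs of CAM_COLORMATRIX1,
   row-major for a 3x3 matrix */
const int cam_matrix1[18] = {
     4716, 10000,   603, 10000,  -830, 10000,
    -7798, 10000, 15474, 10000,  2480, 10000,
    -1496, 10000,  1937, 10000,  6651, 10000
};

/* decode the rational pairs into a plain double[9] matrix */
void decode_matrix(const int pairs[18], double out[9])
{
    for (int i = 0; i < 9; i++)
        out[i] = (double)pairs[2 * i] / (double)pairs[2 * i + 1];
}
```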

I have almost no idea about colour and colour spaces. Could someone give me a hint on what to do?
Matrices are needed so that all cameras match, is that right?
Should white balance (channel multiplication) be done after the matrix is applied?

p.s.
I'll release the code soon (weeks, maybe a couple of months) under the GPL, once it is actually usable for people other than me.
Also working on a GUI for it (Mac, but maybe other OSes later, because the processing code is plain C)

Danne

Nice looking. What are you aiming for? You can see the cam matrices (ColorMatrix1 and ColorMatrix2) in the MLVFS sources. Check dng.c.
The multipliers are turned into the AsShotNeutral tag in your DNG file.

ilia3101

Currently aiming to use the matrices correctly. Had a look at dng.c in the MLVFS source; what is the difference between ColorMatrix1 and ColorMatrix2? Are they just different 'looks', or do both need to be applied to the RAW data?

That image was without any matrices, just raw colour.

a1ex

This should help: https://rcsumner.net/raw_guide/RAWguide.pdf

The DNG spec also explains what's up with these matrices; Andy600 also has a couple of helpful posts on this.

g3gg0

i had some comments about this in
https://bitbucket.org/hudson/magic-lantern/src/d5915d61349a3bf61011bd0d21fbbaacc86691c4/contrib/g3gg0-tools/MLVViewSharp/DebayerBase.cs?at=unified&fileviewer=file-view-default#DebayerBase.cs-122

basically the camera's coefficients give you a matrix to compute (XYZ) -> (Bayer channels).
you have to inverse that matrix then you have  (Bayer channels) ->  (XYZ).
then you multiply it with a (XYZ) -> (RGB) matrix and you have RGB colors.
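A minimal sketch of those two operations, assuming 3x3 row-major matrices stored as double[9]; inv3 and mul3 are made-up helper names, not functions from the ML source:

```c
#include <assert.h>
#include <math.h>

/* invert a 3x3 matrix via the adjugate (no pivoting; fine for
   well-conditioned colour matrices) */
void inv3(const double m[9], double o[9])
{
    double det = m[0] * (m[4]*m[8] - m[5]*m[7])
               - m[1] * (m[3]*m[8] - m[5]*m[6])
               + m[2] * (m[3]*m[7] - m[4]*m[6]);
    o[0] = (m[4]*m[8] - m[5]*m[7]) / det;
    o[1] = (m[2]*m[7] - m[1]*m[8]) / det;
    o[2] = (m[1]*m[5] - m[2]*m[4]) / det;
    o[3] = (m[5]*m[6] - m[3]*m[8]) / det;
    o[4] = (m[0]*m[8] - m[2]*m[6]) / det;
    o[5] = (m[2]*m[3] - m[0]*m[5]) / det;
    o[6] = (m[3]*m[7] - m[4]*m[6]) / det;
    o[7] = (m[1]*m[6] - m[0]*m[7]) / det;
    o[8] = (m[0]*m[4] - m[1]*m[3]) / det;
}

/* o = a * b */
void mul3(const double a[9], const double b[9], double o[9])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            o[r*3 + c] = a[r*3]*b[c] + a[r*3 + 1]*b[3 + c] + a[r*3 + 2]*b[6 + c];
}

/* usage sketch:
     double cam_from_xyz[9] = { ... };              // camera matrix
     double xyz_from_cam[9], rgb_from_cam[9];
     inv3(cam_from_xyz, xyz_from_cam);              // (XYZ -> cam) inverted
     mul3(xyz_to_rgb, xyz_from_cam, rgb_from_cam);  // then into RGB      */
```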

all the weird stuff you can see in my source is adding kelvin WB (which happens in cone domain)

maybe it helps a bit
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

ilia3101

@A1ex thanks, that is useful.
@g3gg0 couldn't really tell much from the code, but thank you, useful to know what the matrices actually do

According to that document, ColorMatrix2 contains the XYZ to camera conversion, so (as g3gg0 says) it would need to be inverted to get RAW -> XYZ, then do XYZ -> RGB... etc.
I'm still confused about one thing though: at what point do the white balance multipliers need to be applied, and in what space?
The PDF says: raw data -> linearisation -> white balance -> demosaicing -> colour space correction -> brightness and contrast control -> final image
So the camera->XYZ->RGB matrix process happens after white balance?
But how could that be the case? If it is, correcting white balance after colour space correction would be wrong, which makes no sense to me.
Or would it be done before white balance, which "(Bayer channels) -> (XYZ)" seems to imply; "Bayer" suggests white balance has not been done yet.

TL;DR In what order to do: (Bayer channels)->(XYZ)->(RGB) and White Balancing?

Apologies for not understanding anything.

Edit: @Danne I realised "What are you aiming for?" must have meant in general, well basically an all in one app for MLV thats free and open source and good(maybe) :D

a1ex

Quote from: Ilia3101 on April 30, 2017, 07:03:49 AM
The PDF says: raw data -> linearisation -> white balance -> demosaicing -> colour space correction -> brightness and contrast control -> final image
So the camera->XYZ->RGB matrix process happens after white balance?

That's my understanding. At least, the AMaZE demosaicing algorithm depends on the raw data being white-balanced before running it. It uses this trick to recover high-frequency details and desaturate (hide) the color artifacts.

The simplest way to do white balance is just multiplying the linear raw data with some scalars. After that, demosaic and apply the matrix to convert from linear raw to linear RGB. Note that Adobe's matrices already contain a white balancing component and must be decomposed (at least that's how dcraw does it).
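The "multiply by scalars" step can be sketched like this, assuming an RGGB CFA pattern (check cfa_pattern in raw_info; other layouts shift the channel positions, and wb_apply is a made-up name):

```c
#include <assert.h>

/* hedged sketch: white balance as per-channel scalars applied to the
   linear Bayer data before demosaicing; assumes an RGGB pattern */
void wb_apply(float *raw, int w, int h,
              float r_mul, float g_mul, float b_mul)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float m;
            if (y % 2 == 0) m = (x % 2 == 0) ? r_mul : g_mul;  /* R G row */
            else            m = (x % 2 == 0) ? g_mul : b_mul;  /* G B row */
            raw[y * w + x] *= m;
        }
}
```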

However, proper white balance is a bit more complex, but that's another can of worms:

http://web.stanford.edu/~sujason/ColorBalancing/adaptation.html
http://www.brucelindbloom.com/Eqn_ChromAdapt.html
http://www.brucelindbloom.com/Eqn_RGB_XYZ_Matrix.html
http://ninedegreesbelow.com/photography/xyz-rgb.html

ilia3101

Ok, thanks for clearing up my understanding/confusion. It's weird how it works, but good to know.

Yeah, I noticed AMaZE worked better when white balance was done before debayering.

I think I'll stick to simple white balance with multiplication for the time being :)

Digital images need to be white balanced, yet in the real world eyes do it for us.

g3gg0

from what i know, white balance has to be done in XYZ domain, like the comments in source say:
/* RAW --> RAW-WB --> XYZ --> Kelvin-WB --> XYZ --> (s)RGB --> RGB-WB */
CamToRgbMatrix = WhiteBalanceMatrix * RGBToXYZMatrix.Inverse() * xyzKelvinWb * XYZToCamMatrix.Inverse() * WhiteBalanceMatrixRaw;

with:
Matrix xyzKelvinWb = coneDomain.Inverse() * xyzScale * coneDomain;

which means: convert the camera image into the XYZ colour representation, then convert it into a cone domain (e.g. using Bradford);
within that cone domain, kelvin conversion is quite simple.
then all the way back from cone -> XYZ -> RGB
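That cone-domain round trip can be sketched with the standard Bradford matrix (values as published on Lindbloom's site; apply_cat and mulv are made-up helper names). Adapting a colour from a source white point to a destination white point is: XYZ -> cone (multiply by B), scale each cone channel by dest/source cone response, cone -> XYZ (multiply by B inverse):

```c
#include <assert.h>
#include <math.h>

/* Bradford cone response matrix and its inverse (Lindbloom's values) */
const double B[9] = {
     0.8951,  0.2664, -0.1614,
    -0.7502,  1.7135,  0.0367,
     0.0389, -0.0685,  1.0296
};
const double B_inv[9] = {
     0.9869929, -0.1470543, 0.1599627,
     0.4323053,  0.5183603, 0.0492912,
    -0.0085287,  0.0400428, 0.9684867
};

/* o = m * v for a 3x3 row-major matrix and a 3-vector */
void mulv(const double m[9], const double v[3], double o[3])
{
    for (int r = 0; r < 3; r++)
        o[r] = m[r*3]*v[0] + m[r*3 + 1]*v[1] + m[r*3 + 2]*v[2];
}

/* adapt an XYZ colour from src white point to dst white point */
void apply_cat(const double xyz[3],
               const double src_wp[3], const double dst_wp[3],
               double out[3])
{
    double cs[3], cd[3], c[3], scaled[3];
    mulv(B, src_wp, cs);   /* source white in cone space */
    mulv(B, dst_wp, cd);   /* dest white in cone space   */
    mulv(B, xyz, c);       /* colour into cone space     */
    for (int i = 0; i < 3; i++)
        scaled[i] = c[i] * cd[i] / cs[i];  /* the simple per-channel scale */
    mulv(B_inv, scaled, out);  /* back to XYZ */
}
```

For kelvin white balance, the destination white point would come from the colour temperature (e.g. via the Planckian locus); that part is left out here.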

ilia3101

My understanding of white balance: process image so that white/grey in real life, no matter what colour the lighting is, appears perfectly white or grey on the screen.
Of course subject to sensible/creative decisions, sodium vapour just doesn't look right in white.

So technically multiplying channels is one way to do it, although probably not the most correct.

I'll look into the cone method with Bradford. The links from a1ex look promising and I'll look at the C# code once again.

Thanks for enlightening me g3gg0. The can of worms is very much open :(

Also, what do real RAW converters do? Do they use a cone domain?

g3gg0

once you get used to the idea that a bayer RGB tristimulus is a vector in some color domain,
that there are color domains like the camera's bayer filters, XYZ, cone domain and RGB, and
that there are matrices that, when you multiply a color triplet with them, convert it from one domain into another,
then you pretty much understand the magic of all the stuff.

and the source code in C# represents these vector/matrix-multiplications as if you would do them in math.
the final matrix 'CamToRgbMatrix' is merely just a combination of all that weird space/kelvin conversion matrices.
so the result of all setup is one matrix that converts "bayer pixels" to "final RGB".

the only pain in the ass is then understanding "kelvin" white balance.
but in cone domain converting kelvin values is just simple "scaling" the single values.

back then it was fun to learn although i don't really like math.

bouncyball

Nice conversation guys! :)

@Ilia3101: really appreciate what you are up to, thought about doing this many times but I'm not a close friend of math either :)

br,
bb

ilia3101

I've been getting my head around this stuff today; finally starting to see clearer after your last post @g3gg0. Today just happens to be a very slow day for me (did not sleep). Also, I couldn't find much about any colour domain called 'cone', but one comes up on Wikipedia that features both Bradford and eye cone cells: LMS colour space. Is that the right one?
@bouncyball  :)

g3gg0

also interesting: https://books.google.de/books?id=suLqBgAAQBAJ&printsec=frontcover&hl=de&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false

i understood the "cone response domain" the same as you: some color space which is probably related to cone sensitivity.
and it seems to make kelvin-wb easier to process.

although i didn't dig deeper as it was good enough for my goals.

ilia3101

An update: I am finally integrating the RAW processing code (that I've had help with on here) with a user interface! Only started the blending yesterday (actually 90% today). Right now I've only connected the exposure and temperature sliders, so no pretty saturated images yet  ::)

Here's a teaser:



Yes, it's hurried and would have been a whole lot better even if I'd posted later today, but I'm just overexcited that it's working.

The sliders are weird like that because the contrast is done as an S-curve. I already regret doing the interface with Apple Cocoa, should have done it in GTK; but oh well. I'll release the code (and app) hopefully by the end of this month/start of July, whenever it works to a basic extent. Also, I don't know how to go about exporting compressed video; all I know how to do right now is a .bmp sequence, which is ridiculous... 24bpp = 1.7x bigger :'(

Danne

Awesome man. Let me know if you need some testing done.

ilia3101

Thanks. Soon ;D

http://www.youtube.com/watch?v=2EFOhRNYMBE&feature=youtu.be

Won't do any more pointless videos till it's done :)
As you can see, the current version has a bad case of Sony highlights (which was fixed before I linked it to the UI; need to remember how)

ilia3101

Hi Gz, I have another question about reading MLV, this time about resolution metadata.

I know that when MLV is recorded at 50/60 fps, the video resolution is 3/5 of the real height because it uses 5x3 binning/scaling/skipping, or what is called mv720 around here... so it needs vertical stretching by 1.66x.

I asked for a sample in the MLV App thread but haven't got any yet, so I found an EOSM sample (with Danne cats :D) in the ML Samplefiles thread and tried it out (the EOSM is always mv720). The vertical resolution in RAWI.yRes was 692, and it was also 692/693 when I tried to extract it from RAWI.raw_info... so how can MLV software possibly be sure whether the image is supposed to be unsqueezed or not?

How can I tell from the headers whether an image is squeezed? Or is the EOSM weird in some way? Do other cameras report the unsqueezed resolution in RAWI.xRes and the recorded resolution in RAWI.raw_info.height? After all, the comment on xRes is: /* Configured video resolution, may differ from payload resolution */

Danne

I think the metadata is present in later builds. Not exactly sure. If not, you can always go the MLVFS route.
Check line 660:
https://bitbucket.org/dmilligan/mlvfs/src/24ebdf591dba0a0431cf1fb674b3a8fcd2730b62/mlvfs/dng.c?at=master&fileviewer=file-view-default

ilia3101

Ah ok, thanks for pointing me to that.

Will try to remember for next time: always look at MLVFS source before asking :D

So it turns out there is no accurate way to detect mv720 :(

This approach will fail in a case like non-mv720 footage at 2.35:1 aspect.

Maybe I'll include FPS in the equation and allow the user to override it.

dmilligan

MLVFS does not check the recorded resolution; it checks the resolution of the video mode (in the raw_info struct). The only way for this to be wrong is if one can adjust the line skipping parameters for the video mode, such as in the crop_rec and 4K branches. That is why new metadata was added in these branches for the precise specification of crop and resolution parameters: http://www.magiclantern.fm/forum/index.php?topic=17021.msg181639;topicseen#msg181639

ilia3101

:o RAWC looks useful.

Anyway normal MLV headers...

So the RAW image's (unstretched) pixel height = frame_headers->rawi_hdr.raw_info.active_area.y2 - frame_headers->rawi_hdr.raw_info.active_area.y1;
Is the value of frame_headers->rawi_hdr.yRes different?
Or are the values equal in all cases?

Basically I'm trying to find out if there's a true/false way of being sure whether the clip needs unsqueezing, or does it just have to be guessed from aspect ratio and other known info?

Danne

Not sure what you mean. Either you check the aspect ratio the way MLVFS does and end up with:
if(aspect_ratio > 2.0 && rawH <= 720)
or you probably need to check the later metadata structure (link provided by dmilligan).
The case where the MLVFS calculation will be wrong is when the crop_rec module is used: the module will report the exact same aspect ratio as squeezed footage, only the footage isn't squeezed. In Switch the user has to choose manually. At least for the EOSM; haven't really gotten to other cams yet :).
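The MLVFS check quoted above can be wrapped up as a sketch (guess_vertical_stretch is a made-up name; as discussed, the heuristic misfires on genuinely wide non-mv720 crops and on crop_rec footage):

```c
#include <assert.h>

/* paraphrase of the MLVFS dng.c heuristic: if the frame is at most
   720 lines tall and wider than 2:1, assume 5x3 sampling and stretch
   vertically by 5/3; otherwise assume no stretch */
double guess_vertical_stretch(int xRes, int yRes)
{
    double aspect = (double)xRes / (double)yRes;
    return (aspect > 2.0 && yRes <= 720) ? 5.0 / 3.0 : 1.0;
}
```

Note that a crop_rec frame like 1920x632 also satisfies the condition, which is exactly the false positive discussed here.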

Danne

I have a painfully slow connection, and shortening MLV files doesn't keep RAWC metadata, so I'm uploading full MLV samples. Meanwhile, here is the metadata from the two samples, coming from the crop_rec_4k branch. Every crop mode has its own metadata representation. Check RAWC at the bottom. Perfect metadata.

*Sample files at the bottom

mv720(squeezed)
MLV Dumper v1.0
-----------------

Mode of operation:
   - Input MLV file: '/Users/dan/Desktop/MLV_files/test_Ilia/mv720.MLV'
   - Verbose messages
   - Verify file structure
   - Dump all block information
File /Users/dan/Desktop/MLV_files/test_Ilia/mv720.MLV opened
File /Users/dan/Desktop/MLV_files/test_Ilia/mv720.M00 not existing.
Processing...
File Header (MLVI)
    Size        : 0x00000034
    Ver         : v2.0
    GUID        : 13170075225100483085
    FPS         : 59.940000
    File        : 0 / 0
    Frames Video: 209
    Frames Audio: 0
    Class Video : 0x00000001
    Class Audio : 0x00000000
Block: RAWI
  Offset: 0x00000034
    Size: 180
    Time: 1.106000 ms
    Res:  1920x672
    raw_info:
      api_version      0x00000001
      height           692
      width            2080
      pitch            3640
      frame_size       0x00266F60
      bits_per_pixel   14
      black_level      2047
      white_level      16200
      active_area.y1   20
      active_area.x1   146
      active_area.y2   692
      active_area.x2   2078
      exposure_bias    0, 0
      cfa_pattern      0x02010100
      calibration_ill  1
Block: RAWC
  Offset: 0x000000e8
    Size: 32
    Time: 1.119000 ms
    raw_capture_info:
      sensor res      5760x3840
      sensor crop     1.00 (Full frame)
      sampling        5x3 (bin 5 lines, bin 3 columns)




mv720(not squeezed)
MLV Dumper v1.0
-----------------

Mode of operation:
   - Input MLV file: '/Users/dan/Desktop/MLV_files/test_Ilia/crop_rec.MLV'
   - Verbose messages
   - Verify file structure
   - Dump all block information
File /Users/dan/Desktop/MLV_files/test_Ilia/crop_rec.MLV opened
File /Users/dan/Desktop/MLV_files/test_Ilia/crop_rec.M00 not existing.
Processing...
File Header (MLVI)
    Size        : 0x00000034
    Ver         : v2.0
    GUID        : 13157316559061766588
    FPS         : 59.940000
    File        : 0 / 0
    Frames Video: 144
    Frames Audio: 0
    Class Video : 0x00000001
    Class Audio : 0x00000000
Block: RAWI
  Offset: 0x00000034
    Size: 180
    Time: 0.842000 ms
    Res:  1920x632
    raw_info:
      api_version      0x00000001
      height           692
      width            2080
      pitch            3640
      frame_size       0x00266F60
      bits_per_pixel   14
      black_level      2047
      white_level      16200
      active_area.y1   60
      active_area.x1   146
      active_area.y2   692
      active_area.x2   2078
      exposure_bias    0, 0
      cfa_pattern      0x02010100
      calibration_ill  1
Block: RAWC
  Offset: 0x000000e8
    Size: 32
    Time: 0.858000 ms
    raw_capture_info:
      sensor res      5760x3840
      sensor crop     1.00 (Full frame)
      sampling        1x1 (read every line, read every column)



Uploaded*
mv720 sample
https://bitbucket.org/Dannephoto/magic-lantern/downloads/mv720.MLV


Crop rec sample
https://bitbucket.org/Dannephoto/magic-lantern/downloads/crop_rec.MLV


Files are compressed in post so you might wanna decompress the MLV files:
mlv_dump -d -o output_mv720.MLV mv720.MLV
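With RAWC present, no guessing is needed: the vertical stretch follows directly from the sampling factors (5x3 gives 5/3, 1x1 gives 1.0, matching the two dumps above). A sketch, under the assumption that RAWC carries per-axis binning and skipping factors; rawc_vertical_stretch is a made-up name:

```c
#include <assert.h>

/* stretch factor from RAWC-style sampling metadata:
   total vertical sampling over total horizontal sampling */
double rawc_vertical_stretch(int binning_y, int skipping_y,
                             int binning_x, int skipping_x)
{
    return (double)(binning_y + skipping_y)
         / (double)(binning_x + skipping_x);
}
```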

dmilligan

Quote from: Ilia3101 on July 24, 2017, 10:54:22 AM
Is the value of frame_headers->rawi_hdr.yRes different?
Yes. The xRes and yRes are the recorded resolution (user selectable). The raw_info stuff is constant per video mode, and has nothing to do with the actual recorded resolution.

ilia3101

Quote from: dmilligan on July 24, 2017, 01:29:42 PM
The raw_info stuff is constant per video mode, and has nothing to do with the actual recorded resolution.
Aha! I get it now. My interpretation: it's basically the maximum available sensor resolution in that mode... I think I see how it works now.

@Danne thank you so much for uploading those files! They are really useful to experiment with, especially the RAWC ones; don't know where else I could have found unusual samples like that. Thanx ;D
And the headers you printed in mlv_dump seem to confirm what dmilligan said, which is nice :)

...downloading the files right now

EDIT:
Are the files losslessly compressed? I get this kind of pattern when decoding both:



(no unsqueezing applied btw)

EDIT 2: Actually read your post thoroughly, sorry missed it before... never mind :-X

Danne

No problem.
Filmed 14bit uncompressed then compressed in post to be able to upload. Mlv_dump for president :).
Just decompress after downloading.
Tell me if you want the other modes from crop rec mode.

BTW. @g3gg0, @a1ex.  Shortening mlv files doesn't include RAWC metadata in the shortened file.

g3gg0

Quote from: Danne on July 24, 2017, 05:58:41 PM
BTW. @g3gg0, @a1ex.  Shortening mlv files doesn't include RAWC metadata in the shortened file.

thanks! will fix it.

ilia3101

@Danne (having trouble decompressing :( ) I was unable to decompress it with the mlv_dump command you showed: mlv_dump -d -o output_mv720.MLV mv720.MLV
The output was the same as the input, still compressed. I tried compiling mlv_dump.c from modules/mlv_rec in a recent 10-12bit branch clone; that version still did not work.
Do I need a specific newer version of mlv_dump? If so... which one? Should I try mlv_dump on steroids?

Danne

Yes, the steroids version should work.
The decompression function, by the way, is probably in the crop_rec_4k branch.
@g3gg0: thanks, looking forward to seeing RAWC added.

DeafEyeJedi

Quote from: g3gg0 on July 24, 2017, 06:48:23 PM
thanks! will fix it.


Looking forward to it @g3gg0 and much thanks to @Danne for pushing @Ilia3101's project ahead. :D
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

ilia3101

Ok, I'm having another go at implementing camera matrices. When I tried before, my results were all purple, but now that I think about it, I realise the matrix probably transformed the footage to XYZ with some kind of white point adjustment, so when I did the same WB multiplication on that XYZ colour that I do in raw space, it increased the blue and red too much, which the matrix had already done.

So, does anyone know what temperature/white point the result of matrices 1 and 2 is? Is it 6500K/D65? What is the difference between 1 and 2? I looked at the DNG spec; not really sure which one I need, but I know they both convert raw to XYZ (once inverted)

Also, about this useful thing:
struct cam_matrices {
    char * camera;
    int32_t ColorMatrix1[18];
    int32_t ColorMatrix2[18];
    int32_t ForwardMatrix1[18];
    int32_t ForwardMatrix2[18];
};

Are ForwardMatrix1 and ForwardMatrix2 the same as ColorMatrix1 and ColorMatrix2, but inverted?

I'm almost there, matrix infrastructure in the app is there, just needs to be set up correctly.

(and thanks everyone)

ilia3101

My suspicions are confirmed!

I've mostly figured it out now.

This is the result of using inverted colourmatrix2 and no white balance adjustment:



I know this gross-looking crap is XYZ (or SLog :P), but what I don't know is: what is the white balance temperature of this? The DNG spec holds no information other than that matrix 1 and 2 will create different temperature images.

Can anyone tell me? I promise this is the last question :P

Also: which function in MLVFS dng.c can give me XYZ white balance multipliers?

dmilligan

DNG only has RGB multipliers for white balance (AsShotNeutral). MLVFS has some functions borrowed from ufraw for computing RGB multipliers from Kelvin, and they are not perfect.

ilia3101

Thanks, I think I've found a solution for multipliers now.

Right now my white balance matrix for raw space is created through this process:
(debayered and black-level-corrected) raw space {1,0,0, 0,1,0, 0,0,1} -> convert to XYZ (using ColorMatrix2) -> convert to cone/LMS space (using CIECAM02) -> WB multiply -> back to XYZ (inverted CIECAM02) -> convert to RGB (using the matrix from dng.c) -> done

As far as I can tell from all I've learnt from (mostly) g3gg0 and others, this is correct... but it looks horrible.

Red tones don't exist anymore:




The pretty image I showed before looks like this:



As you can see, awful Apple Photos on the right is doing a better job; look, there are no red highlights in the flower in my app :'(

Is my order correct?

Danne

@Andy600 would know all there is to know about those matrices.
Also check the dcraw sources; dcraw only uses one color matrix.
Unified mlv_dump uses that one matrix as well.

g3gg0

@Ilia3101:
don't know if they already do some saturation correction etc. Have you tried loading it in e.g. Lightroom or darktable without any correction applied?

ilia3101

I tried other apps; all showed much nicer tones than MLV App with matrices. For now I've disabled them and put the whole matrix thing on the back burner for a little more time. I was looking at old posts in this thread and realised there's still a little more to white balance than what I was doing...

Quote from: g3gg0 on April 30, 2017, 02:51:30 PM
/* RAW --> RAW-WB --> XYZ --> Kelvin-WB --> XYZ --> (s)RGB --> RGB-WB */
CamToRgbMatrix = WhiteBalanceMatrix * RGBToXYZMatrix.Inverse() * xyzKelvinWb * XYZToCamMatrix.Inverse() * WhiteBalanceMatrixRaw
with:
Matrix xyzKelvinWb = coneDomain.Inverse() * xyzScale * coneDomain;
What's the RAW-WB stage? Why are there two white balance steps, RAW-WB and Kelvin-WB?