Apertus MLV Format Discussion

Started by ilia3101, September 12, 2019, 03:34:37 AM


ilia3101

As we know, the Apertus project will be making some improvements and extensions to the MLV format, to add features such as support for non-linear raw data, for example...

So this can be the place where Apertus and Magic Lantern can discuss any changes that will be made to the MLV format. Anyone can contribute or just follow the discussion.

I was talking to Fares on the Apertus Telegram group and we decided it is a good time to start a thread.

Quote from Fares:
we need to spend quality time with the ML community and the Axiom community to build the next version of MLV to be as generic as possible while maintaining the goal of the format - to be very computationally friendly on the camera side.

Luther

Apertus has some freedoms ML doesn't have. For example, using Zstandard instead of LJ92 could possibly offer smaller sizes, but would require FPGA programming, which ML can't do. Or expanding the metadata to include more accurate color-related information (spectral data, for example). Or adding Cooke /i metadata for cinema lenses.
So in order to maintain compatibility, Apertus would have to be limited to what Canon cameras can do. I don't think this is a wise step, as Apertus can go much further due to not being limited by any company or hardware...

ilia3101

They are going to use LJ92. But I think it's OK to add stuff Canon cameras can't do, as long as it stays optional. For example, the AXIOM sensor can shoot non-linear raw for extended dynamic range, and for that they want to add a block to describe the curve.

Quote from: Luther on September 12, 2019, 04:34:50 AM
Or expanding the metadata to include more accurate color-related information (spectral data, for example).

You have read my mind! This was actually going to be one of my main suggestions: I want expanded colour metadata. The current raw_info structure only has space for one camera matrix. I want there to be a better way of adding colour data like matrices, as well as a way to include spectral data for the sensor (optionally, of course).

Fares Mehanna

Thank you Ilia for opening the thread.

Sorry for my late reply. I was revising my knowledge of MLV.

For the last 3 months I was working in GSoC, implementing LJ92 in the FPGA for the AXIOM Beta and Micro, so I gained good knowledge of MLV during and before GSoC.

Luther is correct: Axiom can build its own hardware, so the possibilities are endless. But our main priority is still to be as friendly as possible on the receiver end, since any heavy processing could limit the FPS.

Side info: currently, Axiom cameras do not store footage internally, so RAW data will be transferred and stored in an external recorder. A computationally friendly format will allow the recorder to use less power and be more compact while handling more FPS.

Anyway, I will start by pointing out a few things Axiom currently needs.

1. How the bayer pattern is represented in the frame. For example, if the sensor bayer pattern is RGGB, the data can be represented in the frame as
R G R G R G R G ...
G B G B G B G B ...
or
R G G B R G G B ...
R G G B R G G B ...
Currently Axiom frames are represented in the second style, which is not supported in MLV.

2. HDR modes. The Axiom Beta includes three modes:
- Even and odd lines with different exposure times - not different ISOs as ML does.
- PLR mode, which allows non-linear raw data.
- In the future, Axiom can utilise its high FPS and global shutter to merge several frames together with different gains or exposure times.

The current "mlv_diso_hdr_t" expects dual ISO only, but we need more information for all three modes.
You can check Supragya Raj's work to understand more about PLR.

3. Log-encoded pixels instead of linear values. This is not currently used in the RAW pipeline but might be in the future; check the work done in this area in the Log Curve Simulation Paper. It might be implemented as alpha, beta and gamma parameters, or as a LUT.

I believe that is what is needed to store the RAW data out of the camera right now.
But in the future I think we might need many more possible ways to describe the data in the frame, like whether we are storing each channel separately, or whether we slice the frame horizontally or vertically into several slices. Those tricks can help on the FPGA side.

That is what is on my mind right now. I will keep researching possible areas we might need, and I will also post this thread in the Apertus IRC so interested developers can contribute.

Quote from: Luther on September 12, 2019, 04:34:50 AM
using Zstandard instead of LJ92 could possibly offer smaller sizes, but would require FPGA programming

Interesting point. I was working on LJ92 since it is supported in both MLV and DNG, and I did not really know about Zstandard; the concept of asymmetric numeral systems is interesting. Maybe someone in the future (or me) is going to implement it (:
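For the software side at least, the one-shot libzstd API is small. A minimal sketch, purely illustrative (the FPGA implementation would be a different story entirely, and error handling is trimmed):

#include <stdlib.h>
#include <zstd.h>

/* Compress one raw frame buffer with Zstandard. Returns the compressed
 * size, or 0 on failure. The caller frees *outBuf. */
size_t compress_frame(const void *raw, size_t rawSize, void **outBuf)
{
    size_t bound = ZSTD_compressBound(rawSize);   /* worst-case output size */
    *outBuf = malloc(bound);
    if (*outBuf == NULL)
        return 0;
    size_t written = ZSTD_compress(*outBuf, bound, raw, rawSize, 3 /* level */);
    return ZSTD_isError(written) ? 0 : written;
}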

rexorcine

Is there anything anyone reading this can think of that MLV should or shouldn't be doing, besides what's already been described?
. . .

ilia3101

Other than the non-linear block, there's not much else I can think of myself. This thread should be useful if anything comes up though; I'm assuming there must be something.

And as MLV is getting upgrades, we may as well add a couple of other things, such as better colour metadata. In the case of the AXIOM, where the sensor is upgradable and could be almost anything (in theory), it is important to be able to specify colour info - at least a matrix or two.

And I think it would be nice to add the ability to include spectral data, as that may be available for some sensors too.

More from Fares:
Quote from Fares Mehanna, [12.09.19 00:11]
Hi Ilia. As far as I can remember, the way you handle the color matrices is currently hardcoded in MLV App for Canon cameras, so I think it is a good idea to add this to the MLV format, to generically handle different sensors without hardcoding them.

Quote from Fares Mehanna, [12.09.19 00:14]
The other area that needs some work in the MLV format is how the sensor data is arranged in the frame. For example, Axiom stores frame data as RGGB in the same line, and in the future it could encode every channel separately. So despite the fact that the sensor is an RGGB pattern, the data in the frame can be in a different order than the normal R G R G R G ... \n G B G B G B ...

Edit: his post has finally come through here!

ilia3101

Ok how about some initial ideas to get the ball rolling.

Quote from: Fares Mehanna on September 13, 2019, 04:23:23 PM
1. How the bayer pattern is represented in the frame. For example, if the sensor bayer pattern is RGGB, the data can be represented in the frame as
R G R G R G R G ...
G B G B G B G B ...
or
R G G B R G G B ...
R G G B R G G B ...
Currently Axiom frames are represented in the second style, which is not supported in MLV.

So 2x2 RGGB blocks of pixels are stored one after another?

Could create a "BAYR" block, something like this:


typedef struct {
    uint8_t     blockType[4]; /* "BAYR" */
    uint32_t    blockSize;    /* total size */
    uint16_t    arrayType;    /* 0=Bayer, 1=black and white, 2=possible other patterns...
                                 (16-bit width assumed here; the exact size is still open) */
    uint16_t    layout;       /* 0=sensor order, 1=in groups, 2=channels stored separately, 3=... */
}  mlv_bayr_hdr_t;


This would have to be optional, and when it is not detected in an MLV file, it should be assumed that the pixels are in the normal format, so compatibility with old files remains.

I have not accounted for many cases of how the pixels could be stored; I'm sure with the FPGA there will be quite a few possible configurations.

Another possible solution is a block that actually describes the layout in some numeric way, so it could be any arrangement or order at all, and the software reading the file would have to take care of it. I think that would be too complicated, but if the FPGA code is likely to change the pixel layout a lot, this would be better and also more futureproof.
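To make the "in groups" layout (layout=1 above) concrete, here is a rough sketch of unpacking it back into conventional row order - my own illustration, assuming 16-bit samples and groups stored in raster order, which may not match the real AXIOM ordering:

#include <stddef.h>
#include <stdint.h>

/* Sketch: convert a frame stored as consecutive 2x2 RGGB groups
 * (R G G B R G G B ...) into conventional row-interleaved bayer order
 * (R G R G ... / G B G B ...). Assumes 16-bit samples and even dimensions. */
void ungroup_rggb(const uint16_t *grouped, uint16_t *bayer,
                  size_t width, size_t height)
{
    size_t gi = 0; /* index of the current 2x2 group */
    for (size_t y = 0; y < height; y += 2) {
        for (size_t x = 0; x < width; x += 2, gi++) {
            const uint16_t *g = &grouped[gi * 4];
            bayer[(y + 0) * width + x + 0] = g[0]; /* R  */
            bayer[(y + 0) * width + x + 1] = g[1]; /* G1 */
            bayer[(y + 1) * width + x + 0] = g[2]; /* G2 */
            bayer[(y + 1) * width + x + 1] = g[3]; /* B  */
        }
    }
}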





Quote from: Fares Mehanna on September 13, 2019, 04:23:23 PM
2. HDR modes. The Axiom Beta includes three modes:
- Even and odd lines with different exposure times - not different ISOs as ML does.
- PLR mode, which allows non-linear raw data.
- In the future, Axiom can utilise its high FPS and global shutter to merge several frames together with different gains or exposure times.

The current "mlv_diso_hdr_t" expects dual ISO only, but we need more information for all three modes.
You can check Supragya Raj's work to understand more about PLR.

3. Log-encoded pixels instead of linear values. This is not currently used in the RAW pipeline but might be in the future; check the work done in this area in the Log Curve Simulation Paper. It might be implemented as alpha, beta and gamma parameters, or as a LUT.

I believe that is what is needed to store the RAW data out of the camera right now.
But in the future I think we might need many more possible ways to describe the data in the frame, like whether we are storing each channel separately, or whether we slice the frame horizontally or vertically into several slices. Those tricks can help on the FPGA side.

Supragya Raj came up with this block for PLR:

struct mlv_cmv12kplr_hdr_t {
    uint8_t  blockType[4];  // "PLR_"
    uint32_t blockSize;
    uint64_t timestamp;
    uint32_t expTime;       // exposure time Texp used by the sensor
    uint8_t  numSlopes;     // 1, 2 or 3
    uint32_t expKp1;        // first exposure time in PLR, invalid if numSlopes = 1
    uint32_t expKp2;        // second exposure time in PLR, invalid if numSlopes = 1 or 2
    uint8_t  vtfl2;         // hold voltage, invalid if numSlopes = 1
    uint8_t  vtfl3;         // hold voltage, invalid if numSlopes = 1 or 2
};


It would work, but it seems completely specialised to this one specific model of sensor. If we keep adding blocks for features of specific sensors, in 100 years' time mlv_structures.h will grow huge, and MLV reading software and libraries will need a lot of special cases to handle all of these sensors.

A more generic option could be a lookup table block for conversion of the non-linear data to linear values (16-bit in this example). Something like this:

typedef struct {
    uint8_t     blockType[4]; /* "CURV" - lookup table to get linear values */
    uint32_t    blockSize;    /* total size */
    uint32_t    lutLength;    /* lookup table length, should be equal to 2^bitdepth */
    uint16_t    lutData[];    /* flexible array member, lutLength entries */
}  mlv_curv_hdr_t;


This could handle PLR, LOG, gamma, HLG... any kind of encoding. This block would take up around 32KiB for 14 bit images, 8KiB for 12 bit and 2KiB for 10 bit, pretty good for a LUT.
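On the reading side, applying such a block would be trivial. A minimal sketch, using the hypothetical mlv_curv_hdr_t above and assuming samples are already unpacked to one uint16_t per pixel:

#include <stddef.h>
#include <stdint.h>

/* Sketch: linearise a buffer of raw samples in place using the CURV LUT. */
void apply_curv(const mlv_curv_hdr_t *curv, uint16_t *samples, size_t count)
{
    for (size_t i = 0; i < count; i++)
        samples[i] = curv->lutData[samples[i]]; /* raw code -> linear value */
}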

A crazier option: if a lookup table is seen as too big, a string expression could be stored instead, which must then be evaluated to linearise the image (or to generate that lookup table). This would be annoying for people implementing MLV support.





Quote from: Fares Mehanna on September 13, 2019, 04:23:23 PM
- Even and odd lines with different exposure times - not different ISOs as ML does.

So this is like dual ISO, except every other line has a different exposure time, instead of a different ISO every two lines as in ML?

For this we could evolve the DISO block and come up with new values for the dualMode field.
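Something along these lines, purely illustrative (the names and numbers are made up, not from the MLV spec, and the existing values would stay as they are):

/* Hypothetical extra dualMode values for the existing DISO block. */
enum {
    /* ... existing dual ISO values stay unchanged ... */
    MLV_DUALMODE_ALT_EXPOSURE = 16, /* even/odd lines with different exposure times */
    MLV_DUALMODE_ALT_GAIN     = 17, /* even/odd lines with different analogue gains */
};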

Luther

I have zero experience with format design, but here are some ideas. If something sounds too dumb, it probably is, so just ignore it:
- Better compression (already being worked on by Fares - thanks!)
- More accurate color information (already described by Ilia)
- Metadata for cinema lenses (Cooke /i seems to be the new "standard" in the industry; even Arri is adopting it, IIRC)
- MXF support
- AES256 encryption
- Support for other audio codecs? (FLAC/Opus)
- An SDK with a permissive license (BSD or ISC) and documentation, so others can easily add support in their software/hardware
- Embedded darkframe/flat-field (?)

ilia3101

Quote from: Luther on September 17, 2019, 05:44:05 AM
- More accurate color information (already described by Ilia)
I support this a lot. A block for matrices (so it's possible to include ones for different illuminants) and a block for spectral data is all I want.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- Metadata for cinema lenses (Cooke /i seems to be the new "standard" in the industry; even Arri is adopting it, IIRC)
If there's an open specification, or it's simple to understand, then it would be nice to add - not as a priority though.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- MXF support
I recently tried to work with MXF (bmxlib) and it seems like a horribly bloated container format that deserves to be forgotten. It's like MLV but super bloated, with too many codecs and features.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- AES256 encryption
I think that's unlikely - if you need to encrypt things, you might as well encrypt the whole file.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- Support for other audio codecs? (FLAC/Opus)
Unlikely to happen I think, as audio takes up little space compared to video frames anyway.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- An SDK with a permissive license (BSD or ISC) and documentation, so others can easily add support in their software/hardware
I want to do this and I think it's very important.

Quote from: Luther on September 17, 2019, 05:44:05 AM
- Embedded darkframe/flat-field (?)

I like the idea of an embedded dark or flat frame very much. It could be done by reusing the VIDF block, but putting "DARK" or "FLAT" in the header instead of "VIDF".

g3gg0

Quote from: Ilia3101 on September 17, 2019, 10:13:51 PM
I support this a lot. A block for matrices (so it's possible to include ones for different illuminants) and a block for spectral data is all I want.
matrices are supported, just not really used
edit: not "multiple matrices", just one

Quote from: Ilia3101 on September 17, 2019, 10:13:51 PM
I think that's unlikely - if you need to encrypt things, you might as well encrypt the whole file.
yep, and cannot be done from within the camera

Quote from: Ilia3101 on September 17, 2019, 10:13:51 PM
I like the idea of an embedded dark or flat frame very much. It could be done by reusing the VIDF block, but putting "DARK" or "FLAT" in the header instead of "VIDF".
see DARK and FLAT frame specification ;D

ilia3101

Quote from: g3gg0 on September 17, 2019, 10:39:22 PM
matrices are supported, just not really used
edit: not "multiple matrices", just one

I have noticed the matrix field before, but yeah, it's not really used. It is OK right now, as very few cameras use MLV; MLV App has all the matrices built in for these few cameras, both tungsten and daylight.
But with Apertus there is the possibility for MLV to come from any sensor, so more colour metadata should be added. Do you think that's reasonable?

Quote from: g3gg0 on September 17, 2019, 10:39:22 PM
see DARK and FLAT frame specification ;D

Ha didn't realise :D

DeafEyeJedi

Mad props to @Ilia3101 for starting this thread. Absolutely vital for us to stay on top of this in order to be futureproof. Great call!

g3gg0

Quote from: Ilia3101 on September 17, 2019, 11:40:51 PM
But with Apertus there is the possibility for MLV to come from any sensor, so more colour metadata should be added. Do you think that's reasonable?

which metadata is really needed?

a) do you want to map tristimulus values to another 3d coordinate system? like 3d luts do?
probably the most complete solution, but do we ever get such data for sensors?

b) something like polynomials?

c) simple matrices as we have now?


maybe the solution a) is the most flexible. but worth the effort?

afair a1ex once had a very deep analysis of why we always get weird purple-ish fringing around close-to-overexposed blue areas.
this happened even with open source software, because they all used the adobe camera raw matrices if i remember correctly.
and on top of that, it wasn't possible to just "fix the matrix", because you would always get into this trouble with a simple linear scaling matrix.

maybe for this we could use some kind of cube lut for bayer->XYZ conversion and then store XYZ into DNGs, using an identity matrix as the conversion matrix for the debayering.
hmmm. will this work?

ilia3101

Quote from: g3gg0 on September 18, 2019, 10:14:59 PM
which metadata is really needed?

a) do you want to map tristimulus values to another 3d coordinate system? like 3d luts do?
probably the most complete solution, but do we ever get such data for sensors?

b) something like polynomials?

c) simple matrices as we have now?


maybe the solution a) is the most flexible. but worth the effort?

I would say it is worth doing some form of a) and c). I think for c) we should have a special block that can include the two default matrices - or more, because these matrices seem to be the main method used in almost all software. And for a) I was thinking of adding a block to include camera sensor spectral data.
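For c), the block could be as simple as this hypothetical sketch (the "CMAT" name and field layout are made up, one block per illuminant):

typedef struct {
    uint8_t     blockType[4]; /* "CMAT" (hypothetical name) */
    uint32_t    blockSize;    /* total size */
    uint32_t    illuminant;   /* e.g. correlated colour temperature in kelvin */
    int32_t     matrix[9];    /* row-major 3x3 camera->XYZ matrix, fixed point x10000 */
}  mlv_cmat_hdr_t;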

And for a), something like:

typedef struct {
    uint8_t     blockType[4];        /* "SPEC" - this block contains spectral data */
    uint32_t    blockSize;           /* total size */
    uint32_t    dataLength;          /* number of data points per channel */
    uint32_t    dataStartWavelength; /* what wavelength the data starts at */
    uint32_t    dataInterval;        /* difference in wavelength between data points */
    uint16_t    spectralData[];      /* flexible array member: spectral data for R, G and B */
}  mlv_spec_hdr_t;


I know not many cameras have this data available, but it is worth including for software that has the ability to use it. Probably very little or no software does right now, but I hope to have a demonstration of using spectral data soon. It can be used to generate a 2D lookup table (in xy space, for example) that corrects small shifts in chromaticity and luminance - not perfect at all, but it should be better than just a matrix. This LUT could be generated on demand for any illuminant - fluorescent, tungsten, daylight or whatever. Spectral data could even be used to generate matrices, so I think it is a very futureproof feature to add to MLV.
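As a small illustration of how the data could be used (assuming the hypothetical mlv_spec_hdr_t above, with spectralData interleaved as R, G, B per wavelength - an assumption on my part): integrating the sensor response against an illuminant's power spectrum gives the raw RGB that a neutral patch would produce under that light, which is the starting point for computing white balance or fitting a matrix.

#include <stdint.h>

/* Sketch: raw RGB response of a neutral patch under a given illuminant.
 * The illuminant spectrum must be sampled at the same wavelengths as
 * the SPEC block (dataStartWavelength + i * dataInterval). */
static void raw_rgb_under_illuminant(const mlv_spec_hdr_t *spec,
                                     const double *illuminant, double rgb[3])
{
    rgb[0] = rgb[1] = rgb[2] = 0.0;
    for (uint32_t i = 0; i < spec->dataLength; i++) {
        rgb[0] += spec->spectralData[3 * i + 0] * illuminant[i]; /* R */
        rgb[1] += spec->spectralData[3 * i + 1] * illuminant[i]; /* G */
        rgb[2] += spec->spectralData[3 * i + 2] * illuminant[i]; /* B */
    }
}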

Data for a few cameras is available here: https://github.com/ampas/rawtoaces/tree/master/data/camera, I also found another database recently with a few older EOS cameras.

I think spectral image processing methods are going to grow in popularity, as it seems like a good step forward. So this kind of data will hopefully become more common in the future.



Quote from: g3gg0 on September 18, 2019, 10:14:59 PM
afair a1ex once had a very deep analysis of why we always get weird purple-ish fringing around close-to-overexposed blue areas.
this happened even with open source software, because they all used the adobe camera raw matrices if i remember correctly.
and on top of that, it wasn't possible to just "fix the matrix", because you would always get into this trouble with a simple linear scaling matrix.

I haven't experienced these artifacts in my own videos, but maybe I don't look closely enough. I do see them quite often in other people's videos for sure; most recently in an EOS M video. They are a little bit strange. Does actual Adobe software have them?

Quote from: g3gg0 on September 18, 2019, 10:14:59 PM
it wasn't possible to just "fix the matrix", because you would always get into this trouble with a simple linear scaling matrix.

Yep :)


Quote from: g3gg0 on September 18, 2019, 10:14:59 PM
maybe for this we could use some kind of cube lut for bayer->XYZ conversion and then store XYZ into DNGs, using an identity matrix as the conversion matrix for the debayering.
hmmm. will this work?

You mean converting bayer data to XYZ before it has even been debayered? Wouldn't that require using the surrounding pixels? That's quite close to actual debayering, so it would probably cause those artifacts. Or do you mean converting to XYZ right after debayering, then storing it in the DNG as three XYZ channels? I'm not sure either way is possible. But I know very little about DNG.

Kharak

I don't want to hijack this thread. The future of MLV is very interesting.

I think secure encryption could be a backbone of the MLV format.

@ilia3101

Can you link some examples of the effect spectral data can have on an image? Before and after? And some papers on it? I would like to know more about this.

Luther

@Kharak you're going into a deep rabbit hole here :)
I'm not Ilia, but I think I can answer your question...
The wiki page has a general introduction:
https://en.wikipedia.org/wiki/Spectral_sensitivity

Each pixel in the camera sensor has a different response curve (sensitivity) to different wavelengths in the visible spectrum. For example, pixel 1x1 will have sensitivity X at a wavelength of 445nm. And so on.
This data is essential for processing images, because you need a reference for how the camera 'holds' this color information before it is converted to a color space.
In the case of ACES, you need spectral sensitivity data to create the IDT (Input Device Transform). This IDT will "tell" ACES what the colors are and then transform them to the ACES color space (preferably AP1). You'll do your color grading steps (preferably in the "log-like" ACEScct in 32-bit float) and then ACES needs to transform this gigantic color space again into something the display can actually reproduce. Displays we have today are very limited in terms of color and dynamic range; they can only reproduce a small portion of the AP1 color space. That's why you need to reduce it to something like Rec.2020.
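The matrix part of an IDT is the simple bit; a minimal sketch, where the identity matrix is just a placeholder (a real IDT matrix comes from the camera's spectral characterisation, not from here):

/* Sketch: apply a 3x3 IDT matrix to camera-native linear RGB to get
 * ACES values. The identity below is a placeholder, not a real IDT. */
static const double idt[3][3] = {
    { 1.0, 0.0, 0.0 },
    { 0.0, 1.0, 0.0 },
    { 0.0, 0.0, 1.0 },
};

static void idt_apply(const double cam[3], double aces[3])
{
    for (int i = 0; i < 3; i++)
        aces[i] = idt[i][0] * cam[0] + idt[i][1] * cam[1] + idt[i][2] * cam[2];
}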

The Apertus team can probably talk to their sensor producer and get this data directly from them. That's the good news.
The bad news for ML is that Canon does not release the spectral data of their DSLR/mirrorless cameras to the public. So developers like Ilia have to hard-copy simple matrices from other companies like Adobe.
To solve that, we would need to measure this data ourselves somehow. This research shows some methods, but they are very costly and mostly restricted to industry/academia (even though prices seem to be dropping). It might be doable for someone richer than me (and who working with photography is rich, anyway? :) ). I was doing some research the other day and found this Lighting Passport (spectrometer), for example. It is precise and cheap. I don't know how much monochromators cost. I don't know if the light source also influences the precision but, if it does, a 95+ TLCI light will be required (probably with a temperature as close as possible to CIE D60). There's also the issue of camera lenses, I think. Each lens has a suboptimal MTF and color rendition, and can shift the WB because of its coating (some companies even use this as a "look", such as Cooke).

ilia3101

@Kharak I don't have anything to show yet. Also it's not necessarily an effect, but it could be used for some effects too.

Quote from: Luther on September 20, 2019, 01:16:29 PM
In the case of ACES, you need spectral sensitivity data to create the IDT (Input Device Transform)

Not necessarily - you can convert to ACES from camera colour with a matrix, the same as for any other colour space; it's just not the best solution. Matrices are never a perfect solution, as no camera sensor comes close to meeting Luther's condition (maybe you invented it :D). To meet this condition, the sensor's spectral R/G/B responses would all need to be linear combinations of the XYZ colour matching functions (which also means linear combinations of the eye's cone cell responses) - in that case a matrix would actually be the perfect solution. But no camera is that good.
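In other words, there must exist one fixed, invertible matrix M such that

\begin{pmatrix} \bar{r}(\lambda) \\ \bar{g}(\lambda) \\ \bar{b}(\lambda) \end{pmatrix}
= M \begin{pmatrix} \bar{x}(\lambda) \\ \bar{y}(\lambda) \\ \bar{z}(\lambda) \end{pmatrix}
\quad \text{for all wavelengths } \lambda.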

Quote from: Luther on September 20, 2019, 01:16:29 PM
The Apertus team can probably talk to their sensor producer and get this data directly from them. That's the good news.

This would be really great. Or we could eventually get it measured if we find someone with the equipment.

Quote from: Luther on September 20, 2019, 01:16:29 PM
The bad news for ML is that Canon does not release the spectral data of their DSLR/mirrorless cameras to the public. So developers like Ilia have to hard-copy simple matrices from other companies like Adobe.

There's some data for a few cameras, but yeah. I saw research that showed a method for figuring out the spectral response from a photo of a really big colour chart, so there is some hope.

I will try to stop filling the thread with colour stuff, to give room for discussion of the more important Apertus-related features - hopefully coming soon, as Apertus begins implementing MLV.

Luther

Quote from: Ilia3101 on September 20, 2019, 07:37:27 PM
Not necessarily - you can convert to ACES from camera colour with a matrix, the same as for any other colour space; it's just not the best solution.

ACES is a color management system, not only a color space. You can of course convert to ACES2065-1, but using ACES is more than just that.

Quote
(maybe you invented it :D)

What do you mean?

Quote
To meet this condition the sensor's spectral R/G/B responses would all need to be linear

Exactly the opposite of that.

ilia3101

Quote from: Luther on September 20, 2019, 08:01:52 PM
ACES is a color management system, not only a color space. You can of course convert to ACES2065-1, but using ACES is more than just that.

There's a lot to ACES and I know very little about the system as a whole, but some IDTs are just based on a matrix. rawtoaces uses spectral data, but it only uses it to generate a matrix. And the default option in rawtoaces is still based on the Adobe matrices.

Quote from: Luther on September 20, 2019, 08:01:52 PM
What do you mean?

Well it's called Luther's condition :D

Quote from: Luther on September 20, 2019, 08:01:52 PM
Exactly the opposite of that.

What do you mean? Am I wrong? Did you misread what I said? Meeting Luther's condition means each of the sensor's channel responses is a linear (summed) combination of the X, Y and Z functions, does it not?

Kharak

Thank you Luther and Ilia.

My word choice was not the best. I think I meant "affect of".

Anyways, a better, more true colour response sounds amazing.




Luther

Quote from: Ilia3101 on September 20, 2019, 08:07:21 PM
it only uses it to generate a matrix.
To generate an *accurate* matrix. This is not the case for the matrices we have, for which we don't even know the process of acquisition. Also, I think Light Iron did something different than that for the Panavision DXL2 (this information is not public, though).
Quote
Well it's called Luther's condition :D
I was trying to explain to @Kharak what spectral sensitivity means. I don't think we need this level of precision in photography/video, but some scientific imaging does require it.
Quote
linear (summed) combination of the X, Y and Z functions, does it not?
In the sense of the sum X+Y+Z, this is linear. But the generated image will vary for each wavelength because of the light source and the camera's dynamic range; in that sense it's not linear. See CIECAM02, for example.

ilia3101

Quote from: Luther on September 21, 2019, 06:11:50 PM
To generate an *accurate* matrix. This is not the case for the matrices we have, for which we don't even know the process of acquisition. Also, I think Light Iron did something different than that for the Panavision DXL2 (this information is not public, though).

Well, the matrices we have are by Adobe, so they're probably good; andy600 says they are good. I wonder what method rawtoaces uses to calculate its matrix. I can think of a few different approaches (other than just making the mean error as small as possible). I'm going to ask them. Also, I was wrong: rawtoaces does actually use spectral data by default for conversions, not the Adobe matrices; not sure where I got that wrong idea from.

Quote from: Luther on September 21, 2019, 06:11:50 PM
I was trying to explain to @Kharak what spectral sensitivity means. I don't think we need this level of precision in photography/video, but some scientific imaging does require it.

We'll never get much precision with normal cameras anyway, but it's worth trying things with spectral data other than matrices.

Luther

I think these links might be of interest to you, Ilia, or to someone from Apertus, so I'll just leave them here in case we come back to this discussion in the future.

"Adventures in Spectrometry" - by Roger Cicala:
https://www.lensrentals.com/blog/2018/03/a-geek-of-many-colors-adventures-in-spectrometry/

"Looking at Cine Lens Color Shifts Using Spectrometry" - by Roger Cicala:
https://wordpress.lensrentals.com/blog/2018/04/looking-at-cine-lens-color-shifts-using-spectrometry/

"The absolute sensitivity of digital colour cameras" (PDF):
https://www.osapublishing.org/oe/abstract.cfm?uri=oe-17-22-20211

Spectron:
Quote
Spectron - an open source project for measuring and obtaining digital devices spectral sensitivity curves
Initially this was conceived with the goal of constructing an automated device with an aid to accurately measure spectral sensitivity curves for digital camera sensors. The project has a broader use though and can be used to measure spectral sensitivity curves of various light sensitive sensors (not just camera sensors) - for example spectral sensitivity curves of photodiodes.
https://github.com/Alexey-Danilchenko/Spectron

Spectral Data Analysis:
https://github.com/brandondube/raynbow
http://brucelindbloom.com/SpectCalcSpreadsheets.html

rexorcine

. . .

DeafEyeJedi

Absolutely spectacular reading in these links shared by @Luther. Thanks so much for your input. Much appreciated!