Messages - BatchGordon

Sorry for my late reply!

I understand compress_task() is not the right place to put the code, but for now "a convenient place" is all I need, since I'm just doing preliminary tests.

As I said before, I'm a Java developer and C is not my main language. Furthermore, my previous experience with C (and C++) is mostly in desktop application development. So I'm aware I'm not the right person to be involved in these tests, but I want to try anyway and perhaps, in case of some positive results, someone else can improve my work.

Yesterday I was able to dig a bit deeper into the code and I understood a few things.
More specifically, I was able to change the values of part of the acquired image by putting this code inside compress_task(), right before the call to lossless_compress_raw_rectangle(...):

            unsigned int s = 0; // will receive the sample value
            for (int pos_y = 900; pos_y < 1000; pos_y++) { // should be extended from 0 to raw_info.height
                for (int pos_x = 900; pos_x < 1000; pos_x++) { // should be extended from 0 to raw_info.width
                    struct raw_pixblock * p = (void*)fullSizeBuffer + pos_y * raw_info.pitch + (pos_x/8)*14;
                    // read the value of the sample
                    switch (pos_x%8) {
                        case 0: s = p->a; break;
                        case 1: s = p->b_lo | (p->b_hi << 12); break;
                        case 2: s = p->c_lo | (p->c_hi << 10); break;
                        case 3: s = p->d_lo | (p->d_hi << 8); break;
                        case 4: s = p->e_lo | (p->e_hi << 6); break;
                        case 5: s = p->f_lo | (p->f_hi << 4); break;
                        case 6: s = p->g_lo | (p->g_hi << 2); break;
                        case 7: s = p->h; break;
                    }

                    s |= 0x0FFF; // just an example of a modified sample value

                    // write the new value of the sample
                    switch (pos_x%8) {
                        case 0: p->a = s; break;
                        case 1: p->b_lo = s; p->b_hi = s >> 12; break;
                        case 2: p->c_lo = s; p->c_hi = s >> 10; break;
                        case 3: p->d_lo = s; p->d_hi = s >> 8; break;
                        case 4: p->e_lo = s; p->e_hi = s >> 6; break;
                        case 5: p->f_lo = s; p->f_hi = s >> 4; break;
                        case 6: p->g_lo = s; p->g_hi = s >> 2; break;
                        case 7: p->h = s; break;
                    }
                }
            }

The code to access the sample values is very inefficient (I took it from the read/write functions in raw.c; it may be fine for random access, but not for sequential access) and must be heavily optimized (or, better, rewritten). For now, though, what I find surprising is something else:
I can extend the process to the full frame with no delay if I skip writing the samples back, while with the writes enabled I can't go much beyond 100x100 pixels before everything becomes choppy (even LiveView slows down).
I mean: I expected realtime to be hard, but I don't understand why only the writing portion of the code takes so long. It looks pretty similar to the reading code to me, yet it seems to take something like 100x longer to write a sample.
Any opinion about it?
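For what it's worth, here is a rough sketch of the kind of sequential access I would expect to be faster: unpack all 8 samples of a raw_pixblock once, transform them, and repack the whole block in one go, instead of recomputing the block pointer and doing a read-modify-write per sample. The struct layout is my reading of raw.h and the snippet above (an assumption, verify against your tree); pitch handling and cache behaviour are ignored:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* 14 bytes = 8 packed 14-bit samples; layout assumed from ML's raw.h */
struct raw_pixblock {
    unsigned int b_hi: 2;  unsigned int a: 14;
    unsigned int c_hi: 4;  unsigned int b_lo: 12;
    unsigned int d_hi: 6;  unsigned int c_lo: 10;
    unsigned int e_hi: 8;  unsigned int d_lo: 8;
    unsigned int f_hi: 10; unsigned int e_lo: 6;
    unsigned int g_hi: 12; unsigned int f_lo: 4;
    unsigned int h: 14;    unsigned int g_lo: 2;
} __attribute__((packed));

/* read all 8 samples of one block in one pass */
static void unpack8(const struct raw_pixblock *p, uint16_t s[8])
{
    s[0] = p->a;
    s[1] = p->b_lo | (p->b_hi << 12);
    s[2] = p->c_lo | (p->c_hi << 10);
    s[3] = p->d_lo | (p->d_hi << 8);
    s[4] = p->e_lo | (p->e_hi << 6);
    s[5] = p->f_lo | (p->f_hi << 4);
    s[6] = p->g_lo | (p->g_hi << 2);
    s[7] = p->h;
}

/* write all 8 samples back in one pass */
static void pack8(struct raw_pixblock *p, const uint16_t s[8])
{
    p->a = s[0];
    p->b_lo = s[1] & 0xFFF; p->b_hi = s[1] >> 12;
    p->c_lo = s[2] & 0x3FF; p->c_hi = s[2] >> 10;
    p->d_lo = s[3] & 0xFF;  p->d_hi = s[3] >> 8;
    p->e_lo = s[4] & 0x3F;  p->e_hi = s[4] >> 6;
    p->f_lo = s[5] & 0xF;   p->f_hi = s[5] >> 4;
    p->g_lo = s[6] & 0x3;   p->g_hi = s[6] >> 2;
    p->h = s[7];
}

/* same dummy transform as in the snippet above */
static uint16_t example_transform(uint16_t s) { return s | 0x0FFF; }

/* process a contiguous run of blocks: one unpack + one repack per
 * 8 samples (real code would also step by raw_info.pitch per row) */
static void process_blocks(struct raw_pixblock *blocks, int nblocks,
                           uint16_t (*f)(uint16_t))
{
    uint16_t s[8];
    for (int b = 0; b < nblocks; b++) {
        unpack8(&blocks[b], s);
        for (int i = 0; i < 8; i++)
            s[i] = f(s[i]);
        pack8(&blocks[b], s);
    }
}
```

This at least halves the number of accesses to the (possibly uncacheable) frame buffer; whether that explains the 100x write penalty is exactly what would need measuring.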
I have done some very basic testing and, unless I have done something wrong (which is possible :P), it might be able to process at the required speed.
Now I need some help from someone who knows the code better than me...

As suggested by names_are_hard, the code can go inside compress_task(), probably just before the call to CreateMemorySuite(). That's where I've put my dummy code, anyway.

So... I suppose I can find the res_x*res_y samples in the data pointed to by ptr, after the header.
But now I have two questions:
- How long is the header?
- Are the 14-bit samples already "packed" at this stage, or do they only get packed during compression, with every sample stored as a WORD here?

Sorry if the questions are silly, but it's my first time looking into the code of ML.
I think the only way to make 8 bits per color usable is a precise log conversion, or at least some approximation such as a truncated Taylor series.
I'm afraid both solutions are computationally too complex.

The idea behind the proposed algorithm, the only one simple enough to have any chance of being applicable, can't in my opinion be extended down to 8 bits, as the quantization would be too coarse.
In any case, if it works for 10 bits, a test with 8 could always be attempted.
Thank you for all your suggestions and the additional information, they can really give me a better starting point!

I completely agree with your doubts about performance: I'm ready to do the work and possibly not end up with anything useful; after all, that's what experimenting means!  :D
I'm also sure my code won't be as optimized as this task requires, since I'm mostly a high-level software developer. Anyway, I will share the results and the code as soon as possible, so someone with more knowledge than me can improve my work.

Thanks again and yes, compress_task() seems to be really the best place to start with the code!
Quote from: names_are_hard on July 27, 2023, 03:43:05 AM
In what way would this be different from the 10-bit mode ML already has?
Right question! As far as I know, the current reduction from 14 to 10 bits is done by discarding the 4 least significant bits of each sample. Which is reasonable, but not optimal.
A color depth of 10-bit linear is sufficient for light grading, but many modern cameras offer at least a 10-bit log profile which, while not a raw recording, behaves better in the shadows.

Regarding the speed problem for video... I also have doubts that it can work in realtime, and I don't think I'm able to write code optimized for this processor, but the algorithm I found looks very interesting and, computationally, very light:

Log conversion should be done before the lossless compression, and could benefit it too by allowing a higher compression ratio.
Of course, a de-log step is also required after decompression and before demosaicing in MlvApp.

Does anyone else (besides me) find the hypothesis at least theoretically feasible? ::)

Quote from: reddeercity on July 27, 2023, 07:14:19 AM
Well, I have some experimental code for the 5D2 from back when raw video was just getting started and not easily recorded by the 5D2
(the 5D3 wasn't even out yet, so the 5D2 was the best at the time, with no equal).
Also, I have code that "hard coded" 12-bit from 14-bit in camera for raw video, so no hardware (encoder chip) like there is now.

Thanks so much for the code!
It appears to take a similar route to the old raw module (before mlv_lite), avoiding the hardware chip for the encoding.
I don't know if we can still intercept the image samples in mlv_lite while keeping the hardware encoding functionality or not.
Going back to the old route would likely mean a performance and functionality loss that could be hard to accept.
Anyway, I'll let you know if I need the full code. Thanks again!
Thank you!
So I suppose there's no way to do a 14-to-10-bit sample reduction with some very light computation?
I was thinking about trying to compress the data with a log approximation using just a few bitwise operations.
It probably still can't be done in realtime, but if you can tell me where to put the code I would still like to try it.  :D
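One classic shape such a bitwise-only approximation can take (just my sketch, not any existing ML code) is a piecewise-linear "minifloat" encoding: the position of the highest set bit acts as the exponent and the next 6 bits as the mantissa, which approximates a log curve with only shifts, masks and a count-leading-zeros:

```c
#include <assert.h>
#include <stdint.h>

/* 14-bit linear -> ~10-bit piecewise-linear log approximation.
 * Values below 64 are stored exactly; above that, the top bit position
 * is the "exponent" and the next 6 bits are the "mantissa".
 * __builtin_clz is a GCC/Clang builtin (ML builds with GCC). */
static uint16_t plog_encode(uint16_t x)
{
    if (x < 64)
        return x;                     /* linear region: exact */
    int msb  = 31 - __builtin_clz(x); /* index of highest set bit, 6..13 */
    int mant = (x >> (msb - 6)) & 63; /* 6 bits just below the top bit */
    return (uint16_t)(((msb - 5) << 6) | mant);
}

/* Inverse: rebuild the midpoint of the quantization bucket. */
static uint16_t plog_decode(uint16_t code)
{
    if (code < 64)
        return code;
    int msb  = (code >> 6) + 5;
    int mant = code & 63;
    int step = 1 << (msb - 6);        /* quantization step in this band */
    return (uint16_t)((1 << msb) + mant * step + step / 2);
}
```

Codes only span 0..575 here, so a real 10-bit variant could spend the spare codes on a finer curve; the relative error stays below about 1% of the sample value.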
I'm looking at the source code but I cannot find the routines for the lossless compression.
Is the mlv_lite module using the lj92 code from mlv_rec, or is it just calling internal functions of the Canon firmware that we can't modify?
Is there a way to disable entering the ML menu on a two-finger tap on the screen?
Sometimes I enter it by mistake just picking up my EOS M.
About the crowdfunding, maybe not everyone will agree, but why not impose an upper limit on the amount offered by each individual person, perhaps around 15-20 dollars?

I don't want to be misunderstood, I certainly don't intend to devalue the work done by Bilal, which I consider extraordinary and incredibly superior to what he asks in return.
I only believe that crowdfunding works best when there is broad participation, in which many people can contribute and are in some way encouraged (if not "forced") to do so.

By the way, last time I saw the fundraiser it was already closed, thanks to a few generous users. This time I really want to contribute.  :)
I haven't done extensive testing, but I've discovered a very small problem: when you set the ISO value to AUTO from the Expo menu, you can't get into the ML menu anymore. You have to avoid loading ML and set it back to a different value from the Canon menu.
It's a really small bug; probably no one uses auto ISO in ML (I don't use it myself, I set it by mistake), so it may be a lowest-priority problem to fix.
Nice to know I am not alone!

Quote from: gabriielangel on February 03, 2023, 04:50:35 PM
The shutter button behaves a lot better than last year, when it was a lot more sensitive, but I also press it too far every now and then.

Yes, it definitely behaves a lot better. I will try your suggestions this weekend, thank you very much!

As soon as I get some free time I'll start looking too, but I don't have high hopes, since you know the code so much better than anyone else!
The latest changes appear to be a big improvement over the version from just a month ago!
The shot preview is pretty much usable in my opinion, so even at the highest resolutions I don't have to frame blind anymore!
Danne, you did a great job once again!  ;)

I have other cameras, but I honestly find using Magic Lantern easier and more rewarding than other companies' native shooting functions.  :)

Only one detail bothers me, and that's the shutter button. Isn't there a way to block the photo-taking functionality?
Personally I only use the camera for video, and sometimes I press the button too far while focusing, ruining the shot.
Does this happen to anyone else, or am I the only wimp?  :P

Yes, the idea of comparing points between images and taking the one from the low iso image if the difference exceeds a threshold might be naive... but in my opinion it's also quite wise.  ;)

The choice of the threshold value can be critical and, as you said, it can differ depending on many things (especially the selected ISO values).
However, I still have the impression that the root of the problem may be the exposure for video: ETTR should be applied to the high ISO, not the low one.

Technically speaking, with dual ISO we gain more detail in the shadows, not in the highlights (I say technically because in post we can change the exposure as we prefer).
That's why a1ex reduces the exposure of the high-ISO image before merging, instead of increasing the exposure of the low-ISO one.

By the way, his blending algorithm is much more complex and sophisticated, but I suspect it's also too slow to be usable for video. That's why it has been simplified in this implementation.
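As a toy illustration of the naive threshold idea (my own sketch, not a1ex's actual algorithm): scale the high-ISO sample down to the low-ISO exposure, and fall back to the low-ISO sample wherever the two disagree by more than a threshold, which usually means the high-ISO one clipped:

```c
#include <assert.h>
#include <stdint.h>

/* Blend one pair of co-sited samples from a dual-ISO frame pair.
 * gain_stops: ISO difference in stops (e.g. 2 for 100/400);
 * threshold:  max tolerated disagreement, camera/ISO dependent. */
static uint16_t blend_sample(uint16_t lo, uint16_t hi,
                             int gain_stops, uint16_t threshold)
{
    /* reduce the exposure of the high-ISO sample before merging */
    uint16_t hi_scaled = hi >> gain_stops;
    int diff = (int)lo - (int)hi_scaled;
    if (diff < 0)
        diff = -diff;
    /* large disagreement: the high-ISO sample probably clipped, so
       trust the low-ISO one; otherwise prefer the cleaner high-ISO
       shadows */
    return (diff > threshold) ? lo : hi_scaled;
}
```

Picking the threshold per ISO pair (and handling noise near it gracefully) is where this naive version would need real work.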
Quote from: iaburn on January 25, 2023, 09:37:40 PM
Activating "Fullres reconstruction" improves the resolution on the shadows but can break highlights, so I will try to add an option to do fullres reconstruction only on the shadows for these cases

From the documentation of a1ex's dual-ISO algorithm, full-resolution reconstruction should only be possible in the midtones, and impossible for both highlights and shadows.
This is because only the midtones fall into the "sensitive" part of both the low- and high-ISO curves.

Being 5.2K, this new setting has so much vertical resolution that dual ISO is no longer so penalizing in terms of detail loss.
Danne and Bilal did a wonderful job!
Quote from: ilia3101 on July 06, 2022, 10:57:02 AM
My goal is to create digital image processing that handles bright colours smoothly. It's possible. Film does it. It's what I've been working towards for the past two years.

That's a great goal.
Color science (including color smoothness and highlight rolloff) can make more difference to the beauty of an image than dynamic range, and much, much more than resolution (personally I don't care much about resolution unless it's less than 720p).

Speaking of which, are colors in MlvApp treated like in the color science of the Arri Alexa (or Amira), with color saturation limited to a certain value, as in watercolors?
The latest releases are really great, thanks to Danne!

I don't have suggestions about shortcuts for the buttons INFO and SET, but about the gain button, there's a limitation that perhaps can be avoided.
When dual ISO is enabled, the gain buttons always get disabled.
That's perfectly understandable if the gain buttons are set to control the ISO value, but if they're set to "aperture only" maybe they don't need to be disabled.
I could be missing the whole point, anyway...
Raw Video / Re: AI algorithms for debinning
July 20, 2021, 02:45:23 AM
About a better way to manage debinning... some months ago I had an idea for a custom debayering that could give us a bit more horizontal resolution, at the expense of some vertical resolution.
It could work but... the problem is that it needs the green pixels to be binned in a "shifted" way between lines, as shown in the following post:
as you can see, the middle green pixel of the binning group on one line falls right between two groups on the following line (just ignore the line skipping in the picture, it doesn't apply to 5K anamorphic). Essentially, it's a three-pixel shift.

But... after some checking and testing I can confirm that, at least on the EOS M with Danne's latest ML version, the green pixels are binned with a single-pixel shift between one line and the next. So my idea can't work on the currently recorded material.
In case someone knows how to change the shift in the binning (I played with the registers with no success)... there could be a chance to test it; otherwise I think there's no way to improve the current quality.
Thanks Levas!
I had already visited that page but I missed the module in the list. Now I've got it!
I'm gonna install the module following your suggestions for doing the tests.  :)
As soon as I am ready, I'll share my results here and I would be pleased if you and the other experienced guys can comment on them.  :-\
I'm doing some tests and I think I have found something interesting about the binning...
To double-check it, I would like to play a bit with some settings of the CMOS registers.

Since I'm using the latest EOS M release from Danne and the ADTG module is not included, is there an alternative way to adjust those registers without recompiling ML?
I've seen similarly named registers on the page: Movie -> Presets -> Advanced
Can anyone confirm whether they are exactly the same?

Yes, I had already read that post. It helped me understand how the binning is done in these cameras. And I have seen that other people had almost the same idea as mine.

What you say is absolutely true: we cannot restore the information that is lost in the binning.
But I still think an unbinning step is needed before demosaicing: the values of the unbinned pixels won't be the original ones, just estimates from the nearest ones.

My opinion is not that we can restore what is lost during binning, but that applying a demosaicing algorithm designed for unbinned pixels won't give the best results on a binned image. In other words, I think we are losing even more detail than the binning process itself implies.
It's absolutely just an opinion and I could be wrong, but I would like to test it.

P.S.: I have seen your contributions in many parts of the project and I find them really impressive!
Lately, I am studying by myself the Bayer filter and some demosaicing algorithms.
I think those included in MlvApp are great and the results are generally stunning.
What I'm not sure about is the correctness of the common way of processing videos recorded with the 1x3 binning pattern.
In my opinion, demosaicing the video and then doing a horizontal stretch (or a vertical shrink) loses part of the information captured in the raw image.
Personally, I would first expand the raw image by unbinning the pixels with an ad-hoc algorithm, then apply one of the existing algorithms (e.g. AMaZE) to the resulting raw.
I could be wrong, but if you look at the pixels that are binned... they are spatially in a slightly different position than in a normal Bayer filter.
I have an idea about how to do the unbinning, but I would like to have some suggestions on where to make the changes to the code.
BTW, I am a software developer, so my problem isn't the coding, but not all of the project code is clear to me.
If someone can help me proceed with the test, that would be great.
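To make the idea concrete, here's the kind of ad-hoc expansion I have in mind (a hypothetical sketch with made-up weights, not tested against real footage): each recorded value is treated as the average of 3 horizontal same-color photosites, and the row is expanded 3x by leaning the outer estimates toward the neighboring bins:

```c
#include <assert.h>
#include <stdint.h>

/* Expand one row of n same-color 1x3-binned samples into 3*n estimates.
 * The true per-photosite values are gone; the center sample keeps the
 * bin value and the outer ones are interpolated toward the neighbors
 * (the 2:1 weights are a guess, not calibrated). */
static void unbin_row_1x3(const uint16_t *binned, uint16_t *out, int n)
{
    for (int i = 0; i < n; i++) {
        uint16_t prev = binned[i > 0 ? i - 1 : i];     /* clamp at edges */
        uint16_t cur  = binned[i];
        uint16_t next = binned[i < n - 1 ? i + 1 : i];
        out[3*i]     = (uint16_t)((2*cur + prev) / 3);
        out[3*i + 1] = cur;
        out[3*i + 2] = (uint16_t)((2*cur + next) / 3);
    }
}
```

The expanded output could then be fed to a normal Bayer demosaic (e.g. AMaZE) as if it were an unbinned readout, which is exactly the comparison I'd like to test against the usual demosaic-then-stretch pipeline.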
I just asked because I seemed to remember trying that resolution (in frtp) with a previous build, so I thought I was no longer able to set it up correctly.
Hello, I need some help!  :(
I'm unable to get the 1504x1782 (1x3) resolution starting from the 5K anamorphic frtp preset.
It should be possible, since it has been described here: and I was able to get it with older releases.
I'm currently using Danne's build from 27 October.

Another question: how can I disable the function that takes a picture/snapshot while in movie recording mode?
Thank you for any suggestion.

Uhmmm... I'm getting some thin vertical stripes over the realtime preview in this newest anamorphic 2.35 AR, but only during recording.
Still a great job!