[IMPOSSIBLE] dual ISO H.264

Started by jagnje, November 02, 2013, 06:19:13 PM


apefos

I just sent an email to ginger hdr support, hope they listen to me... imagine ginger hdr workflow for this... it would be a breeze...

(At the moment I sent the email to Ginger, the view count in this topic was "Read 1920 times". Yes, an interesting coincidence!)

N/A

7D. 600D. Rokinon 35 cine. Sigma 30 1.4
Audio and video recording/production, Random Photography
Want to help with the latest development but don't know how to compile?

apefos

Great, downloading right now to see your results!

Are you free of magenta dots (hot spots) in both 3x crop zoom and in default 1920x1080p no crop?

N/A

Yeah I believe so. I don't have the patience right now to try and process these, but feel free to go for it.

apefos

Great! I am free of hot pixels too! The only workflow I know at this moment is to discard half of the lines (a 50% downscale in height) and do deinterlace + interpolation for each ISO. This is not ideal; better would be a way to preserve all the lines and reconstruct the images by interpolating them... so I have not tried yet. It would be best to find a professional to develop it... at this moment I am waiting for an email reply from Ginger...

I just uploaded an original 100/800 MOV dual ISO H.264 video in 1920x1080 @ 23.976 fps (3x crop mode disabled, no VAF-TXi filter), just the default 600D/T3i camera. I got good average colors in this one; I used a 1000-watt halogen lamp lowered by a dimmer and a 2800 K manual WB.

https://www.dropbox.com/s/4kwykig7jim1rvo/MVI_1814.MOV

I can't believe it, it is amazing: three people, three cameras, three uploaded videos, all free of hot pixels and with good line separation! This is very promising! Fingers crossed that we get a great workflow!

Danne

I DO NOT RECOMMEND THIS
You can actually record dual ISO in H.264 without a modified .mo. I tried this a couple of months ago on my 5D Mark III. It goes like this:
1 - set up your dual ISO settings
2 - prepare your raw rec at the lowest setting, 640x360, and then disable the raw rec module
3 - get into the normal camera rec mode and start recording in H.264
4 - go back into the ML raw rec menu while recording H.264 and start raw recording, then hit rec again, which will start the raw recording as well on top of the H.264 recording.
I stop the raw with the normal button and the H.264 recording by hitting the menu button.

Not the most beautiful solution, and AT YOUR OWN RISK, of course. If a developer would like to delete this suggestion, feel free to do so. This is not anything I practice at all; I only did it for fun.

apefos

Quote from: Danne on November 23, 2013, 09:34:42 AM
I DO NOT RECOMMEND THIS
You can actually record dual ISO in H.264 without a modified .mo. I tried this a couple of months ago on my 5D Mark III. It goes like this:
1 - set up your dual ISO settings
2 - prepare your raw rec at the lowest setting, 640x360, and then disable the raw rec module
3 - get into the normal camera rec mode and start recording in H.264
4 - go back into the ML raw rec menu while recording H.264 and start raw recording, then hit rec again, which will start the raw recording as well on top of the H.264 recording.
I stop the raw with the normal button and the H.264 recording by hitting the menu button.

Not the most beautiful solution, and AT YOUR OWN RISK, of course. If a developer would like to delete this suggestion, feel free to do so. This is not anything I practice at all; I only did it for fun.

please explain to us why you do not recommend it

also, tell us the advantages of using your method

thanks

apefos

I am using the 2013/11/23 build for 600D.102 from Latest Build download page. (select your platform = 600D.102)

http://builds.magiclantern.fm/#/

to enable dual iso H264 recording, I replaced the dual_iso.mo with the one I downloaded from here:

https://bitbucket.org/mk11174/magic-lantern/downloads/ML_600D_Dual_ISO_H264_Nov_22_2013.zip

There is a temperature issue in the 2013/11/23 build for 600D.102: the display shows 70 Celsius, and so far the developers do not know if this is real overheating or a display problem. I recorded some clips and got no Canon overheating warning symbol and no shutdown. I compared with the 2013/09/28 build, where the temperature rose to 60 Celsius. Not a big difference, so I will keep using the latest build...

Danne

Quote from: apefos on November 23, 2013, 09:52:11 AM
please explain to us why you do not recommend it

also, tell us the advantages of using your method

thanks

As I said, this is only something I tried for fun. I don't know if it could mess with the code when recording both H.264 and raw simultaneously. There are no advantages whatsoever. Good luck anyway with the testing. I see there's a modified raw.mo for H.264 now; better stick to that.

apefos

Houston, we have a problem... (and some interesting ideas for solutions)

Update 1: I took a look at the timeline with 600% zoom, and the lines' position (horizontal position) is always the same from the beginning to the end of the recorded clips. Even better, on my camera the line positions are the same for all the clips, including clips with different frame rates. All the frame rates I recorded were multiples of 0.999 (24 x 0.999 = 23.976) (30 x 0.999 = 29.97) (35 x 0.999 = 34.965); this seems to avoid the waterfalling behavior.

When looking at the image at 100% zoom (default size) on a computer screen, the line separation looks perfect; you see the different ISOs perfectly.

But when zooming in to 400% or 800%, you start to perceive that the line separation is not so perfect, and the separation changes randomly: two lines for each ISO, or two lines for one ISO and three for the other, or two lines for one ISO and then a subsequent line that is some kind of merge before the other ISO comes...

So the workflow of a 50% downscale in height to get one line per ISO does not work. Also, extracting two images using two lines per ISO maybe will not work... (I say maybe because I did not try.) So we will probably need a special algorithm which can handle these situations and reconstruct the two separate images, or, instead of separating the images, do some kind of fusion and get the merged result directly.

This algorithm needs to be built by an expert, and this seems to be the point where this idea will stop forever... unless someone is willing to develop it... the algorithm needs to find which line is ISO 100, which line is ISO 800, and which line is a merge between them to do the job (would a merged line be an average ISO 300 result? or a random ISO, something between the two ISOs...)

Or the algorithm could separate only the lines which are at the correct low/high ISOs, discard the unusable lines, and then reconstruct the images using the useful lines (maybe half of the lines will be useful in each frame, and maybe the position of the good lines will be the same among all the frames in each shot).

Discarding lines would not be so good because it increases aliasing and lowers the resolution. A better solution would be to use all the lines: a smart algorithm which can do the job using all the information in the frames. The information is there... but this is a job for an expert in computer programming...

Another good idea for the algorithm is to try to reconstruct the two-lines-per-ISO pattern before extracting each ISO: find the merged lines and "lift or lower" their luminance values, to get a perfect two-lines-per-ISO image before the deinterlace + interpolation. Maybe this is the easy way to build the algorithm. The algorithm numbers the lines and considers (first image = lines 1+2 / 5+6 / 9+10) and (second image = lines 3+4 / 7+8 / 11+12). Then it finds the darkest lines for the low-ISO group and the lightest lines for the high-ISO group. All other lines are considered merged lines and are calibrated by lifting or lowering their luminance to match the other lines... After this, the algorithm does the deinterlace + interpolation considering two lines per ISO: for the first group it puts the interpolated lines below the original lines, and for the second group it puts them above the original lines (this keeps the height alignment between the two images), and then it merges them! I believe this will work, we just need an expert in programming...
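To make the "lift or lower" idea concrete, here is a rough sketch in Python/NumPy (my own illustration, not anything from Magic Lantern); the fixed 2+2 line cycle and the use of each line's mean brightness to decide how much to scale it are assumptions:

```python
import numpy as np

def reconstruct_two_line_pattern(img: np.ndarray) -> np.ndarray:
    """Sketch of the 'lift or lower' idea: assume an ideal repeating
    cycle of 2 high-ISO lines followed by 2 low-ISO lines, then scale
    every line so its mean matches the reference line of its group."""
    out = img.astype(float)
    means = out.mean(axis=1)
    # group line indices by their position in the assumed 4-line cycle
    hi_idx = [i for i in range(len(means)) if i % 4 in (0, 1)]
    lo_idx = [i for i in range(len(means)) if i % 4 in (2, 3)]
    hi_ref = max(means[i] for i in hi_idx)   # lightest hi-ISO line
    lo_ref = min(means[i] for i in lo_idx)   # darkest lo-ISO line
    for i in hi_idx:
        if means[i] > 0:
            out[i] *= hi_ref / means[i]      # lift merged hi lines
    for i in lo_idx:
        if means[i] > 0:
            out[i] *= lo_ref / means[i]      # lower merged lo lines
    return np.clip(out, 0, 255)
```

After a pass like this, the deinterlace + interpolation described above would run on the cleaned-up frame.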

Also, instead of separating the line groups before "lifting or lowering" the merged lines, a better idea could be to find the darkest and lightest lines, compare the lines around them, and do the "lift or lower" in a way that reconstructs the two lines per ISO. Just changing the order of things... lift or lower before separating the groups, instead of separating the groups before the lift or lower. Or, even better, do the lift or lower for the merged lines against a line-pattern mask with the two-lines-per-ISO design... the programming expert can decide and implement this...

The important things are: 1 - the information is there, 2 - we are free of hot pixels, 3 - the recording is stable and has sound, 4 - it seems the line positions (good and merged lines) are the same among the frames of each shot, 5 - the color cast can be solved in post or at processing time.

Then a crazy idea came to my mind: maybe one line per ISO is the way to get this merge between the two ISOs done in camera. One line per ISO will never give line separation; instead it will merge everything into a mess, or, if we are lucky, into the result we want straight out of the camera!!! (as I said: a crazy idea!)

See these images: the first is at 100% zoom (default size) to show that to the eye the line separation is good, and the second is zoomed in to show that the separation is random and contains merged lines (maybe it keeps the same random position and the same merged/good lines among all frames in each shot, which would facilitate the algorithm's job):




apefos

I updated the previous post with lots of new ideas for the algorithm, please read it again.

The important thing is that we have ideas for the algorithm; the expert in computer programming will translate the ideas into algorithm code, so if you have ideas for the algorithm, please share them here...

apefos

I looked at the images at 600% zoom in the timeline, and the lines keep the same horizontal position from the beginning to the end of the clips, and even the same position in different clips and in clips with different frame rates on my 600D.

The two clips I downloaded from the two other people also show the lines in the same position from beginning to end, but the line position differs from one camera to another; not a problem for the algorithm.

The latest 2013/11/23 build for 600D.102 shows 70 Celsius; I compared with the 2013/09/28 build, which shows 60 Celsius, so maybe not a big difference... Only the latest builds work with dual ISO H.264 recording.

Well guys, it seems my job is done... I know nothing about computer programming, so developing the algorithm is not for me... :'(

This idea is now waiting for an expert in computer programming to build the algorithm and turn it into wonderful HDR videos from cheap Canon Rebel DSLR cameras in glorious full HD 24p, and even 35p for slow motion. (Is someone from the Magic Lantern team willing to do it?) :)

Many thanks and goodbye for now... 8)

apefos

Sunglasses off, I am back!  :)

There is a pattern! This makes things completely possible!

I tried the same technique used in the video I made earlier:

I will show four working steps for the same image, from each of the three cameras used in the H.264 dual ISO videos of this topic.

the working steps are:
Step A: pure image, just 400% zoom
Step B: the image with the height reduced by 50% to eliminate half of the lines and turn it into what would be a one line per iso, dual iso image
Step C : the even field deinterlaced with interpolation
Step D : the odd field deinterlaced with interpolation
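Steps C and D can be sketched in Python with NumPy. This is just my illustration of the idea (not an existing tool), and it uses simple line doubling as the interpolation:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split a dual ISO frame into even/odd line fields (one per ISO)
    and interpolate each back to full height by line doubling."""
    even = frame[0::2]                       # lines 0, 2, 4, ...
    odd = frame[1::2]                        # lines 1, 3, 5, ...
    even_full = np.repeat(even, 2, axis=0)   # restore original height
    odd_full = np.repeat(odd, 2, axis=0)
    return even_full, odd_full
```

A real workflow would run this per frame on images extracted from the MOV, and would likely use a smarter interpolation than line doubling.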

Observing step A, I realized there is not a perfect separation of the two lines per ISO as we would like, and we cannot tell exactly how this separation was impaired; we can just see that some lines look like a mixture of different ISOs.

Observing step B, we can see that the lines which are a mixture of the different ISOs appear at a constant interval, i.e., when the camera does the debayering and compression, the mixture of ISOs happens in some lines at a constant interval. This means we are finding the beginning of a mathematical formula that can restore these lines to their original luminance level, because everything indicates that the mixture of ISOs follows a constant line interval, and the intensity of the mixture should be the same at the same distance between lines.

Observing the images of steps C and D, we can establish that there really is a constant interval for the luminance mixture in the lines; moreover, we can also see that this variation occurs in a wave form, i.e., I found that there is a frequency.

What the programmer of the algorithm needs to do is discover this frequency and turn it into a mathematical formula that retrieves the correct luminance of each line of the image. There is probably no need to separate the images into groups of different ISOs at first; instead, implement the function that removes the wave frequency in the luminance variation caused by the camera's debayering and compression, and then, with the correct luminance in the reconstructed lines, the algorithm will be able to separate the two images with different ISOs.

As you can see in the pictures below, the frequency of the luminance variation is the same for all three cameras, which suggests the algorithm will work on all cameras.

You can also notice that the frequency of the luminance variation is the same in steps C and D, only with the wave inverted, which is expected because these images correspond to the even and odd fields respectively.

No need to worry that the deinterlaced and interpolated images show the two ISOs in a varying waveform; it is precisely this that the algorithm will correct.

This experimentation leads me to the theory that the camera's debayering and compression generate a frequency variation in the line luminance, in some waveform that could also be reconstructed in dual ISO recording with one line per ISO; it suffices to discover the wave pattern's frequency and turn it into a mathematical function.
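Discovering the frequency could start with a Fourier transform of the per-line mean brightness. The sketch below is hypothetical Python/NumPy, exercised here only on a synthetic flat wall with a 32-line wave, not on real camera footage:

```python
import numpy as np

def find_line_wave_period(gray: np.ndarray) -> int:
    """Estimate the period (in lines) of the luminance wave from a
    flat gray-wall frame, via the FFT of per-line mean brightness."""
    profile = gray.mean(axis=1)
    profile = profile - profile.mean()        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(profile))
    peak = int(np.argmax(spectrum[1:])) + 1   # strongest non-DC bin
    return round(len(profile) / peak)         # period in lines

# synthetic "gray wall" with a 32-line luminance wave
lines = np.arange(320)
wall = 128 + 20 * np.sin(2 * np.pi * lines / 32)
frame = np.tile(wall[:, None], (1, 16))
```

On real footage the profile would be noisy, so the peak bin would only approximate the period, but it gives the programmer a starting point.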

Hope all this can help someone to develop the algorithm...

The scientific spirit is moved by curiosity and by not believing in the impossible. The scientist's motivation never ends; the scientist is always thinking and drawing ideas from his mind. The scientific spirit does experiments, draws conclusions and elaborates theories, moves on to a new, more refined experiment, and the improved experiments turn a theory into a law. A scientific law is nothing more than a mathematical formula that has a functional connection with reality: a mathematical formula, a function! And an algorithm may be composed of a number of functions.  :P

Was this a good homework? Do I deserve a try in one line per iso?

(the images below are displayed at 400% zoom for better viewing of the lines; do not worry about the macroblocking, it is due to the 400% zoom)






apefos

Another interesting approach would be to forget about lines and think about some kind of debayer algorithm.

After finding the wave pattern's frequency, instead of recovering each line's luminance directly, this information could be used in conjunction with the luminance and color information of the surrounding pixels and surrounding lines to do a debayering instead of deinterlacing + interpolation + merging.

This approach makes my curiosity about one line per ISO more intriguing, because finding the wave pattern's frequency for the luminance in dual ISO H.264 video with one line per ISO could make it possible to create a much better debayering algorithm than using two lines per ISO...

wow, my mind is blooming...

I strongly believe that if someone decides to develop this algorithm, studying the behavior of one line per ISO will be mandatory to understand and develop it, no matter whether the final working solution uses one or two lines per ISO...

Instead of "debayering", this algorithm could be called "DeLinering".

AdamTheBoy

I own a 5D Mark III and an Atomos Ninja 2, so I'm very curious about the potential of recording the dual ISO video feed to the Ninja and then processing it. Because it's uncompressed, I wonder if we might get some really nice results. Or is that impractical?

apefos

I think it is impractical, let me explain why:

Updated: maybe the following idea is also impractical, because we do not know if the H.264 compression can retain enough information from each pixel; probably not...

I came back here now to talk about my next idea, and I believe it is the solution (if it can be implemented by the magic lantern team)

Please follow this line of thought:

In the video I made by compressing the DNG sequence into H.264, I proved the line separation can be retained by the codec. So a question was born in my mind: if the H.264 codec can retain the line separation, why are we getting merged lines? Simple answer: the problem comes from the debayering!!! When the camera does the debayering before compressing to H.264, it hurts the line separation, because the sensor debayering uses the information from surrounding pixels to reconstruct the whole image, and if one pixel is dark and another is bright, the debayering calculates an average between them... so some lines come out merged, with a luminance between the two ISO values.

So what is the MAGIC solution? Simple! Implement a way to avoid the debayering algorithm in the camera and record the raw information compressed in H.264; on the computer we then use a workflow similar to raw: convert the H.264 into DNG and follow the same workflow used for raw.

And you might ask, so why not record in raw? Because H.264 allows full HD recording on the low-budget cameras at 24 fps, 30 fps and even more.

What we need to do is avoid the camera's debayering algorithm and record the raw information compressed into H.264.

To me this idea makes perfect sense. Does it make sense to you all? It also makes sense for an external uncompressed recorder or a lossless-compressed external recorder.

would it be possible and easy to implement?

(in electronics this thing is called a "bypass")

We can try this at CBR 1.3x, or even 1.8x or more if the camera can handle it.

Please read the updated information at the beginning.

RenatoPhoto

Not even 5D3 has enough processing power to compress raw.  Been discussed previously at length.
http://www.pululahuahostal.com  |  EF 300 f/4, EF 100-400 L, EF 180 L, EF-S 10-22, Samyang 14mm, Sigma 28mm EX DG, Sigma 8mm 1:3.5 EX DG, EF 50mm 1:1.8 II, EF 1.4X II, Kenko C-AF 2X

apefos

Quote from: RenatoPhoto on November 23, 2013, 09:31:48 PM
Not even 5D3 has enough processing power to compress raw.  Been discussed previously at length.

It is not raw compressed into raw; it is raw compressed into H.264: the Bayer pattern recorded into H.264.

But I already updated my previous post, because maybe H.264 cannot keep enough information for each red, green and blue pixel...

apefos

The beginning of "DeLinering Algorithm" (or "DeWaving Algorithm")

The first idea for developing the "DeLinering Algorithm" is a static algorithm that trusts the wavelength and wave amplitude to be constant over the whole frame, from the first to the last line... I think "DeWaving Algorithm" is a better name, because this first attempt does not remove the lines, it reconstructs them. It is considered static because it assumes the wavelength and wave amplitude will always be the same, and it does not find the first line: the user must reposition the first high-ISO line at the first line of the frame (the beginning of the first wave) by moving the video up a little in the timeline. The amount of movement will probably always be the same for each camera (as we saw in a previous post, each camera keeps the line positions in all recorded clips).

Idea is:

1 - shoot a dual ISO H.264 video pointing the camera at a white or grey flat wall/paper with perfectly even illumination across the whole frame.

2 - zoom into the image in the timeline to perceive the number of lines in one wavelength (x lines); if needed, downscale 50% in height to help perceive the waves.

3 - with a color picker, find the luminance value of each line inside one wavelength.

4 - calculate the amount of lift or lowering of luminance for each line inside one wavelength needed to reconstruct the "two lines per ISO" structure in the x lines which form the wavelength.

5 - each line inside the wavelength will have an equation like this: (my luminance + y%) or (my luminance - y%)

6 - create the algorithm with 1080 equations, one per line, repeating the wavelength's group of equations

If the wavelength and wave amplitude are constant, this will work.

I can do everything except the last step (putting it into actual code), because I do not know computer programming...

http://www.en.wikipedia.org/wiki/Wave

Updated: looking at the images in my post showing the three cameras, it seems the wavelength and wave amplitude are not constant, i.e., the luminance variations seem random... but it needs the flat-wall test to be sure.
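If the flat-wall test does show a constant wave, steps 1 to 4 above could be sketched roughly like this in Python/NumPy (my illustration only; classifying each line as hi or lo by comparing it to the midpoint between the wave's ridge and valley is an assumption):

```python
import numpy as np

def calibration_factors(gray_wall: np.ndarray, period: int = 32):
    """From a flat gray-wall frame, compute one correction factor per
    line inside one wavelength: every line is scaled toward the
    reference (ridge or valley) of its ISO group."""
    profile = gray_wall.astype(float).mean(axis=1)[:period]
    hi_ref = profile.max()                    # the wave's ridge
    lo_ref = profile.min()                    # the wave's valley
    mid = (hi_ref + lo_ref) / 2
    factors = []
    for lum in profile:
        ref = hi_ref if lum >= mid else lo_ref  # hi or lo ISO line?
        factors.append(ref / lum)
    return factors
```

Each factor is the "(my luminance + y%) or (my luminance - y%)" equation, written as a multiplier.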

apefos

Hi guys, I have great news!

I did the grey wall test and I found the wave!

And the best news: the wavelength and wave amplitude are both constant! Not just constant on average, but perfectly constant!

I will make an attempt to write the algorithm! It seems I can write out all the equations from my last post; the only thing I cannot do is put the functions into a compiled program, but after everything is written, that last step will be easy and fast for someone else to do.

I will show images to demonstrate how things work.

;D  :)  ;)

But remember: there is a chance this will work, but nothing is proven in the real world yet, so everything can still fail!!!!!  :o

N/A

Good luck with this, would love to see it come to fruition.

britom

THAT'S GREAT F*CKING NEWS!
7D Builds with RAW support: http://bit.ly/14Llzda

apefos

Thanks guys, I will do my best!

Today is Sunday, and after lots of brainstorming, I think I deserve some rest!

As I said before, I have some important work to do over the next days, so I will use my free time to carry on with the project.

I got new ideas about the wave amplitude; they need tests...

I will keep you posted!

apefos

I decided to finish the work this Sunday to free my mind for my next job... so here it is:

I decided to create the algorithm for the ISO 100/800 combo (a 3-f-stop difference). It can also be 80/640, 160/1250 and so on... This is because the Magic Lantern advice says it is a safe maximum difference to avoid aliasing when merging the ISOs. Also, I think it is a good idea to avoid increasing the noise too much, and a 3-f-stop difference does not push the high ISO too far.

It is possible to implement all the ISO differences in the same algorithm (100/200, 100/400, 100/800, 100/1600, 100/3200); see at the end of this post how to do it.

The ISOs will be called hi and lo (high and low). (This reminds me of the band A-ha: "hunting high and low, ahhhh ahh ahhh...")

I did some tests shooting the grey wall with less and more light to perceive the wave amplitude better. In these tests I found that when the highlights get too high, the results were better keeping the values a little below the maximum. So I chose a safe maximum value for the formula: 245, just a little less than 255.

I found it will be impossible to reconstruct the two-line structure exactly. The reason: the camera's debayering process turns the two-line separation into a mixture of two or three lines, resulting in the wave. After reconstructing the lines, the interpolation algorithm must do its job considering this pattern.

wavelength = 32 lines in 7 groups and 14 sub-groups.
each group contains the adjacent hi-lo lines
each sub-group contains hi or lo lines

the groups and sub-groups distribution inside the wave is:

hi-lo:
2-3
2-2
3-2
2-2
3-2
2-3
2-2

Numbers on the left are ISO hi, numbers on the right are ISO lo.
There are 3 lines together twice on the left and twice on the right.
The first two groups have the same design as the last two groups.
The three groups in the middle are in a symmetric arrangement.
This design keeps things calibrated.

So the algorithm is (see the attached original wave and reconstructed wave images below):



DeWaving Algorithm by Apefos
(for H264 dual iso with two lines per iso)


General function: reconstruct the lines in H.264 video recorded on a Canon DSLR running the Magic Lantern firmware with the dual ISO module set to two lines per ISO and a 3-f-stop ISO difference, removing the wave pattern generated by the camera's debayering algorithm and allowing the extraction of two images with different ISOs.


Three suggestions for matching the algorithm with the wave pattern in the image:

1 - The user repositions the video file's height in the timeline to put the first line of the first wave at line 01 of the frame.

2 - Insert a wave 00 before wave 01 in the algorithm and leave the image from the camera as is. The user types a number between 00 and 31, and the entire algorithm is moved down by that many lines from the first line of wave 01. This way wave 00 will fit the lines above.

3 - The user types a number between 01 and 32, and this number will be the line where wave 01 starts. Wave 00 will fill the lines above/before wave 01. (best option)

(After some trying, the user will find the number for his camera.)
(All video files from the same camera are expected to need the same amount of height repositioning.)


Main functions (rules all line functions):

main function A: decimal results below 0.5 = discard (round down)

main function B: decimal results equal to or above 0.5 = +1 (round up)

main function C: results above 245 = 245

main function D: (this must be decided by the programmer)
idea 1: line functions apply their multiplier separately to the R, G and B values of each pixel in the line.
idea 2: line functions apply their multiplier to the global luminance value of each pixel in the line, some kind of brightness adjustment.
(luminance values are measured in 8-bit RGB, from 0 to 255, for both ideas 1 and 2)


The Wave design: Line Isos and Line Functions:

wave 00 (to process the lines before/above wave 01)
(repeat the lines from wave 01)

wave 01 (starts at line 0001) or (starts at line 0001 + number typed by user) or (starts at line typed by user)

wave line 01 = iso hi, line function: luminance x 1.132184
wave line 02 = iso hi, line function: luminance x 1.010356
wave line 03 = iso lo, line function: luminance x 0.523438
wave line 04 = iso lo, line function: luminance x 0.870130
wave line 05 = iso lo, line function: luminance x 0.592920
wave line 06 = iso hi, line function: luminance x 1.031414
wave line 07 = iso hi, line function: luminance x 1.082418
wave line 08 = iso lo, line function: luminance x 0.770115
wave line 09 = iso lo, line function: luminance x 0.957143
wave line 10 = iso hi, line function: luminance x 1.238994
wave line 11 = iso hi, line function: luminance x 1.020725
wave line 12 = iso hi, line function: luminance x 1.387324
wave line 13 = iso lo, line function: luminance x 0.837500
wave line 14 = iso lo, line function: luminance x 0.683673
wave line 15 = iso hi, line function: luminance x 1.036842
wave line 16 = iso hi, line function: luminance x 1.000000
wave line 17 = iso lo, line function: luminance x 0.663366
wave line 18 = iso lo, line function: luminance x 0.917808
wave line 19 = iso hi, line function: luminance x 1.368056
wave line 20 = iso hi, line function: luminance x 1.020725
wave line 21 = iso hi, line function: luminance x 1.270968
wave line 22 = iso lo, line function: luminance x 0.817073
wave line 23 = iso lo, line function: luminance x 0.807229
wave line 24 = iso hi, line function: luminance x 1.042328
wave line 25 = iso hi, line function: luminance x 1.010256
wave line 26 = iso lo, line function: luminance x 0.582609
wave line 27 = iso lo, line function: luminance x 0.893333
wave line 28 = iso lo, line function: luminance x 0.523438
wave line 29 = iso hi, line function: luminance x 1.026042
wave line 30 = iso hi, line function: luminance x 1.158824
wave line 31 = iso lo, line function: luminance x 0.788235
wave line 32 = iso lo, line function: luminance x 1.000000

wave 02 (starts at line 0033) or (starts at line 0033 + number typed by user) or (just after previous wave)
(repeat same lines from wave 01)

(and goes like this until reaches line 1080)
(the 1080 lines need 33.75 waves, so the algorithm can have 35 waves: from wave 01 to wave 34, plus the wave 00 before/above wave 01; total = 35 waves)

End of algorithm.
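For illustration only (not a tested implementation), applying the table could look like the Python/NumPy sketch below. The WAVE list copies the 32 multipliers of wave lines 01-32 verbatim; the first_line offset follows suggestion 3, and np.round stands in for main functions A and B:

```python
import numpy as np

# the 32 per-line multipliers from the table above (wave lines 01-32)
WAVE = [1.132184, 1.010356, 0.523438, 0.870130, 0.592920, 1.031414,
        1.082418, 0.770115, 0.957143, 1.238994, 1.020725, 1.387324,
        0.837500, 0.683673, 1.036842, 1.000000, 0.663366, 0.917808,
        1.368056, 1.020725, 1.270968, 0.817073, 0.807229, 1.042328,
        1.010256, 0.582609, 0.893333, 0.523438, 1.026042, 1.158824,
        0.788235, 1.000000]

def dewave(frame: np.ndarray, first_line: int = 0) -> np.ndarray:
    """Apply the DeWaving multipliers to every line of a frame.
    first_line is the user-typed line where wave 01 starts; lines
    above it are handled by wrapping around (the 'wave 00' idea)."""
    out = frame.astype(float)
    for row in range(out.shape[0]):
        out[row] *= WAVE[(row - first_line) % len(WAVE)]
    # round to integers (main functions A and B) and cap at the
    # safe maximum of 245 (main function C)
    return np.minimum(np.round(out), 245.0)
```

Whether the multipliers are applied per R/G/B channel or to a global luminance (main function D) is left open here, as in the algorithm above.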

To create the line-function table for the other ISO differences: shoot a grey wall with perfectly even illumination, with the lens at f/8 to avoid vignetting, defocused. On the computer, convert the image to 16-bit and discard the colors, turning it into greyscale. Find the first line of the wave and use the color picker to find the luminance value of each line. Line 16 of the wave will be the maximum luminance value for the high ISO; line 32 will be the lowest luminance value for the low ISO. The other high-ISO lines must be multiplied by a factor/ratio to reach the same luminance value as line 16, and the other low-ISO lines by a factor/ratio to reach the same luminance as line 32. The conversion table of wave 01 in the algorithm shows which lines are low ISO and which are high ISO. To find the factor/ratio for each line, divide the luminance value of line 32 by the low-ISO line's luminance value, and divide the luminance of line 16 by the high-ISO line's luminance value. After finding the factor/ratio numbers, create the table. (The wavelength is 32 lines; having found the correct first line, line 16 and line 32 will be the maximum and minimum luminance values respectively, the wave's ridge and valley.) To find the first line of the wave in H.264 Canon DSLR dual ISO video, compare the image pixels with the images below.

Other line functions can be created in the algorithm for each line, for example: hue correction, saturation correction, white balance correction...

See the wave and the reconstructed wave:


apefos

Last observations:

Someone needs to compile it into some kind of DUALISOH264.exe, so we can drag the MOV file onto it and get the DeWaved still images. Maybe also with the ISO extraction and interpolation... (I do not know computer programming.) Or maybe the Ginger HDR people could do something...


My theory for the one-line-per-ISO version is:

the wavelength will be 16 pixels

the amount of merging will be higher and the line separation will be almost or completely lost, so working with a 2-stop difference would be a smart decision to help the camera's debayering.

if the in-camera debayering merges all the lines, then instead of reconstructing the lo and hi ISOs, maybe it will be possible to adjust the luminance values of each line directly in the image.

it deserves a try...


And That's All Folks!!!