Raw video interlaced for SD camera

Started by otherman, May 24, 2014, 11:05:10 AM

otherman

SD cameras have a big resolution limit for raw recording, and interlaced video can be a good workaround. 1080i can be converted into good-enough 1080p with the 3D deinterlacing magic of AviSynth and its filters, and this technique could bring fake-4K raw to CF cameras too!
It's not the real thing, but 3D deinterlacing can produce some amazing results. I'll definitely pay for such a feature!

a1ex

Proof please.

Record a few seconds of full-resolution raw, discard odd/even lines to make it interlaced, then reconstruct it with your proposed method and compare the result with the original.
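A minimal numpy sketch of that round trip, assuming the full-resolution raw frames are already loaded as 2-D arrays; the deinterlace() placeholder and the 14-bit white level are assumptions, not part of any existing tool:

import numpy as np

def make_fields(frames):
    # Simulate interlacing: keep even rows from even-numbered frames,
    # odd rows from odd-numbered frames (half the vertical data per frame).
    return [f[i % 2::2, :] for i, f in enumerate(frames)]

def psnr(ref, test, white_level=16383.0):
    # Peak signal-to-noise ratio in dB, assuming 14-bit raw data.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(white_level ** 2 / mse)

# frames: full-resolution Bayer frames loaded elsewhere (e.g. from DNGs)
# deinterlace(): placeholder for the reconstruction method under test
# reconstructed = deinterlace(make_fields(frames))
# for ref, rec in zip(frames, reconstructed):
#     print(psnr(ref, rec))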

If your results are convincing, I'll implement it.

If not, I'll ask the moderators to move this to Duplicate Questions ;)

Audionut

Quote
imagesource("x:\00%d.tif", use_DevIL=true)
separatefields()
selecteven()
weave()
converttoyv12()
QTGMC(preset="slow")

[Images attached: source, interlaced frame, deinterlaced frame]

Don't know why someone hasn't suggested interlacing earlier; the broadcast industry has been using it since the beginning of time.  :)

a1ex

For deinterlacing, you need at least two source frames, otherwise it's just vertical interpolation. Also, if the scene is static (or a pan from a static scene), reconstructing the original frame is very easy (and you may as well take a still picture).

For our purpose, the source frame has to be RAW (Bayer), not RGB. So you will need RG lines from one frame and GB lines from another. Or, you may group lines by 2, as with dual ISO, to keep full color info in each frame.
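A small numpy illustration of the two options just described, assuming a standard RGGB mosaic (even rows RG, odd rows GB); the array contents are dummy values:

import numpy as np

# One full-resolution Bayer (RGGB) frame: even rows are RG, odd rows are GB.
bayer = np.arange(8 * 8, dtype=np.uint16).reshape(8, 8)   # dummy data

# Option 1: single-row fields. The RG field has no blue samples and the
# GB field has no red samples, so neither field carries full colour.
field_rg = bayer[0::2, :]
field_gb = bayer[1::2, :]

# Option 2: group rows in pairs (dual-ISO style). Each field keeps complete
# RG+GB pairs, so full colour information survives in every field, at the
# cost of coarser (2-row) vertical sampling.
pairs = bayer.reshape(-1, 2, bayer.shape[1])       # (row_pairs, 2, width)
field_a = pairs[0::2].reshape(-1, bayer.shape[1])  # row pairs 0, 2, 4, ...
field_b = pairs[1::2].reshape(-1, bayer.shape[1])  # row pairs 1, 3, 5, ...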

So, keep trying :D

Audionut

Yes, my source was 10 frames.

separatefields()  <- separate the even lines into a half-height frame, same with the odd lines
selecteven()  <- discard the resulting odd frames from above
weave()  <- combine the even frames to produce an original-resolution frame
QTGMC()  <- deinterlacer

We should not be worried about the deinterlacing. It is well tested (and accepted) thanks to the broadcast industry.  ;)
Having to deinterlace double-line interlacing, or separate RG/GB fields, will be more complicated. I'll see what I can do with some dual ISO frames.

otherman

I ran the very same test as Audionut, so there's no point uploading it :D
I'm going to run some tests with 2 lines ('cause I haven't a clue how to manage the "RG and GB" thing).

Audionut

Yeah, I don't think we should be worrying about the deinterlacing itself, per se, if some existing deinterlacer(s) can perform the required action. Reinventing the wheel :P

The issue will arise if none of the currently available deinterlacers can handle dual-line interlacing in any fashion.

otherman

For now, I'm trying to make an avisynth script that produces a "2-line interlaced" clip (to deinterlace afterwards). Suggestions?
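For reference, outside AviSynth the construction itself is simple; a numpy sketch of building such a "2-line interlaced" test clip from pairs of full-resolution frames (the frames list and the function name are hypothetical):

import numpy as np

def interlace_2x2(frame_a, frame_b):
    # Row pairs 0-1, 4-5, ... come from frame_a (time A);
    # row pairs 2-3, 6-7, ... come from frame_b (time B).
    # Assumes both frames have the same shape and an even number of rows.
    out = frame_a.copy()
    h, w = out.shape
    pairs = out.reshape(h // 2, 2, w)
    pairs[1::2] = frame_b.reshape(h // 2, 2, w)[1::2]
    return out

# frames: list of consecutive full-resolution raw frames (hypothetical)
# clip = [interlace_2x2(frames[i], frames[i + 1])
#         for i in range(0, len(frames) - 1, 2)]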

Audionut

Can't think of an easy way with avisynth.

I was going to use some dual ISO frames with "cr2hdr --debug-blend", which will output the separated frames. Then just weave the bright or the dark frames together.   :)

otherman

I managed to get this frame to experiment with, and I believe we have to find a way to manage the "RG and GB" thing  :-\ : 2 lines don't seem to work.


And after some tests with "RG and GB", I know that even if it's possible, I don't know how to do it  :'(

otherman

If somebody wants to try something, this is a 5-frame clip interlaced 2-by-2.

otherman

On RG and GB: we could reconstruct the blue channel of the first field by motion-estimated interpolation from the second field, and vice versa for the red channel, using the full-temporal-resolution green channel for good motion estimation. Yes, I think this is the way; "2 rows" deinterlacing loses too much detail.
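A rough numpy sketch of that idea, with a plain temporal average standing in for proper motion-compensated interpolation (a real version would first estimate motion on the full-rate green channel; the function and plane names are illustrative):

import numpy as np

def fill_missing_blue(blue_prev, blue_next):
    # Estimate the blue plane for an RG field at time t from the blue planes
    # of the neighbouring GB fields at times t-1 and t+1.
    # Stand-in: plain temporal average. A real version would warp blue_prev
    # and blue_next along motion vectors estimated on the full-rate green
    # channel before blending.
    return (blue_prev.astype(np.float32) + blue_next.astype(np.float32)) / 2.0

# The red plane of a GB field would be reconstructed the same way, from the
# red planes of the neighbouring RG fields.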

Audionut

a1ex, can you describe how to make cr2hdr output original resolution, bright and dark DNG?

Currently, it looks like it is interpolating the bright and dark exposures to the as-shot resolution.

a1ex

The closest thing is with --debug-amaze. However, before reaching that step, it will try to identify the bright and dark fields, and in this case it will fail (so you need to plug in some fake values instead of the autodetection).

I suggest loading the raw data in Octave (or Matlab if you have a license). If there is interest, I can make a tutorial, covering how to load raw files, how to apply some filters, how to export the output, and maybe even some basic debayering and a very simple converter for dual ISO.

This could be a good starting point for anyone who wants to research raw image processing algorithms (including FPN reduction, denoising, pink pixel fixing, super-resolution or whatever else).
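The same workflow translates directly to Python/numpy; a sketch assuming the raw data has already been dumped as plain unpacked little-endian 16-bit samples of known size (file name, dimensions, black/white levels and RGGB order are all assumptions; real CR2/DNG files need a proper loader first):

import numpy as np

WIDTH, HEIGHT = 1920, 1080      # assumed frame size, not a real camera mode
BLACK, WHITE = 2048, 16383      # rough 14-bit black/white levels

# Load a plain little-endian 16-bit dump of the Bayer mosaic.
bayer = np.fromfile("frame.raw", dtype="<u2")[:WIDTH * HEIGHT]
bayer = bayer.reshape(HEIGHT, WIDTH).astype(np.float32)

# Normalise to 0..1.
bayer = np.clip((bayer - BLACK) / (WHITE - BLACK), 0.0, 1.0)

# Very basic half-resolution debayer: one RGB pixel per RGGB 2x2 cell.
r = bayer[0::2, 0::2]
g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
b = bayer[1::2, 1::2]
rgb = np.dstack([r, g, b])

# rgb can now be filtered, exported, or used to prototype dual ISO,
# FPN reduction or denoising experiments.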

otherman

I'm so close to getting it! (the avisynth part :D)
I've synthesized the new red channel for the GB rows, but I'm getting ghosts of nearby frames too. I'm sure it's just down to my non-existent coding skills.

Audionut, can you help me? This is my post on the avisynth forum where I explain what I did and the problem.

Audionut

Quote from: a1ex on May 27, 2014, 09:28:59 AM
If there is interest, I can make a tutorial, covering how to load raw files, how to apply some filters, how to export the output, and maybe even some basic debayering and a very simple converter for dual ISO.

Registering interest.   :)

Quote from: otherman on May 27, 2014, 10:31:44 AM
Audionut, can you help me? This is my post on the avisynth forum where I explain what I did and the problem.

I never got into motion estimation with mvtools. Each color channel doesn't include full lines of resolution, only individual pixels, so you may need to do a vertical separation as well. See page 5 of this PDF.

But then you only have quarter-resolution detail for the channels you want to interpolate, so it may not be useful. It may be best to first show a1ex that deinterlacing a two-field source is possible.

Once the theory is shown to be sound, I'm sure he will be more interested in taking it to the next level.   :)

otherman

I already did it.  :)
It's frustrating to know I'm so close.

Audionut

Ah sorry, I meant vertical separation.  Check my edit.

otherman

From my experiments, deinterlacing a two-field source gives you quarter resolution too  :'(

Audionut

How did you determine that? The source (dual ISO) is half-resolution interlaced, just like all other interlaced content, simply scanned differently. A regular deinterlacer that expects single-line fields will struggle.

Donald Graft seems to think it will be extremely easy.  Just need to provide them with an accurate source I guess.

otherman

I don't understand: in a theoretical interlaced frame we have all the red points (not half) from a time A, and all the blue points from a time B. Isn't it possible to debayer it normally, getting the full red channel of time A (but not of time B) and the full blue channel of time B (but not of time A)? Or would having half the green points from time A and half from time B produce wrong output?

Until a few days ago I thought every pixel had all 3 channels; I didn't know about the whole debayer thing, so... sorry for the stupid questions.


a1ex

If you debayer in camera, it's no longer RAW, it's 8-bit YUV.

You had YUV422 recording and silent pictures before raw was found, but nobody was impressed by that.

;)

otherman

I was thinking of debayering on the PC and deinterlacing after that: am I getting it wrong?

otherman

and... wait a minute: uncompressed YUV422 video? I want that!