Anti Aliasing - crazy idea (script?)

Started by Andy600, November 03, 2013, 12:48:06 PM


Andy600

I usually shoot crop raw video on my 50D because aliasing can be pretty bad in non-crop mode and this got me thinking.

If the camera is line skipping (I presume it keeps line 1 and skips lines 2 and 3, etc.), could it be possible to post-process a DNG by splitting each vertical and horizontal line and spacing the resulting pixels apart as if they were not skipped, i.e. pixel 1 - space - space - pixel 2 - space - space - pixel 3, then interpolating the missing data? I know 2/3rds of the data is missing and would need reconstructing, which (I guess) would be very processor intensive and not accurate enough for full image reconstruction, but it might be usable for compositing small areas where aliasing ruined a shot.
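Not an existing tool, just a rough numpy sketch of the idea as described, assuming a simple keep-1-skip-2 pattern and plain linear interpolation (the function name and details are made up for illustration):

```python
import numpy as np

def respace_skipped_lines(img, skip=3):
    """Hypothetical sketch of the idea above: put each captured row/column
    back at its original sensor position -- every `skip`-th line -- leaving
    gaps, then fill the gaps by linear interpolation along each axis."""
    h, w = img.shape
    tall = np.zeros((h * skip, w))
    tall[::skip, :] = img                      # captured rows at 0, skip, ...
    ys, kept = np.arange(h * skip), np.arange(0, h * skip, skip)
    for x in range(w):                         # interpolate the missing rows
        tall[:, x] = np.interp(ys, kept, tall[kept, x])
    wide = np.zeros((h * skip, w * skip))
    wide[:, ::skip] = tall                     # same trick for the columns
    xs, keptx = np.arange(w * skip), np.arange(0, w * skip, skip)
    for y in range(h * skip):
        wide[y, :] = np.interp(xs, keptx, wide[y, keptx])
    return wide
```

As the post says, 2/3 of the data is simply invented by the interpolation, so this can only smooth jaggies, not recover detail.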
Colorist working with DaVinci Resolve, Baselight, Nuke, After Effects & Premiere Pro. Occasional Sunday afternoon DOP. Developer of Cinelog-C Colorspace Management and LUTs - www.cinelogdcp.com

a1ex

Sounds crazy; I doubt it will work.

Look up super resolution. You might have some luck with SR from a single image (http://www.wisdom.weizmann.ac.il/~vision/SingleImageSR.html), but I think a more effective way would be SR from the previous and next frames in the video stream (the footage must be handheld!). The algorithm would be extremely similar to what I'm experimenting with for denoising in dual ISO shots.
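For context, multi-frame SR in its simplest textbook "shift-and-add" form looks something like the sketch below. This is a generic illustration, not a1ex's algorithm; the sub-pixel shifts are assumed known here, whereas real tools estimate them with motion search (which is what MVTools does):

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Toy shift-and-add super-resolution: each low-res frame samples the
    same scene at a known sub-pixel offset (dy, dx), in high-res pixels.
    Frames are dropped onto a finer grid at their offsets and averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::factor, dx::factor] += frame   # place samples on the fine grid
        cnt[dy::factor, dx::factor] += 1
    out = np.zeros_like(acc)
    hit = cnt > 0
    out[hit] = acc[hit] / cnt[hit]             # sites no frame covered stay 0
    return out
```

This is why the footage must be handheld: without sub-pixel motion between frames, every frame lands on the same grid sites and there is no new information to merge.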

So, until I publish the algorithm, take a look at the papers I've posted in the dual ISO thread, and also at Interframe, SVP, MVTools and whatever similar tools you may find.

I've also recommended this plugin a few times: http://www.infognition.com/super_resolution_vdf/
but nobody has posted any feedback on it, so what should I assume? (a) it doesn't work; (b) VirtualDub is for aliens; (c) the plugin is not free; or (d) ?!?!?

Andy600

Thanks a1ex. That's some great info. I'll check out the SR plugin. It looks like it might just work :)

I've got some reading to do :D

swinxx

Hi a1ex.
So you're also experimenting with an algorithm for preserving details. Very nice.
Is it planned to work with normal raw files too, i.e. non-dual-ISO files?
Thx.

Andy600

The VirtualDub SR plugin is OK but, to my eyes, this app does it a lot better: http://www.compression.ru/video/super_resolution/super_resolution_en.html (although there is a subtle chroma shift).

SR does very slightly improve aliasing artifacts in one of my problem videos, but it's not handheld, so maybe SR is less effective. Plus, I presume the down-scaling algo is equally important!?

As for the single image shots on that research page, I personally prefer the bicubic interpolation, although it does soften the image a bit.

maxotics

No watch runs the same, yet each man believes his own :)

Andy, I've created many test images with moire-producing shots and have studied them deeply in non-debayered mode.  I've written elsewhere that if you look at these images with no preconceived assumptions about aliasing/moire, you will see very predictable interference patterns; that is, a row of pixels RGRGRG or BGBGBG has similar intensities, and relationships to the others, as a FUNCTION of the angle of the straight line that produces the aliasing.

I believe there IS a method to greatly reduce these effects, but it must be based on blocks of rectangular pixels and think in the Bayer pattern (not debayered).

This epiphany has not hit Alex yet ;)  His code works around single pixel centers, which is a photo, not video (line skipping) way of thinking about it.  Sorry Alex! 

Andy, it's taking me a long time to put my focus-pixel stuff into effect because there is no real documentation about how pixels are stored in ML RAW, or how they can be read or written.  There are reading routines in raw2dng and writing routines in PDR.  BUT, once I get past this I will look into video aliasing routines. You are NOT crazy. Alex has been very upset with 1% and how he doesn't push his stuff to the main dist.  I'm upset with everyone that they are not more helpful to those trying to work in this realm.  When they answer questions they spend no real time thinking about your problem or explaining the concept.  Everyone has a beef ;)

Fortunately, this stuff is such a time drain I know it won't be bad if I just throw up my hands and quit ;)  I know I'm not alone!


Andy600

I think you're right that everyone has their own beef (or agenda) but I do think there is a consensus on pushing ML development ever further and the leaps that a1ex and co have made this year are phenomenal. I only wish I could (or had time to) get my head around coding. I get stumped with even the most basic code, especially that which involves some math. I'm more the arty type :D lol

Regarding moire, I find that RawTherapee and AMaZE take care of most of it, and anything that can't be fixed is probably a lost cause. I rescued a lot of shots thanks to a1ex pointing me to the RT/AMaZE workflow, but my main issue is with some recent non-crop raw video that has a ton of aliasing. TBH it seems that this issue (which has plagued video for many years) is the holy grail for plugin devs. My suggestion is only thought through to the point of asking 'what is line skipping' and 'how many lines', etc., not taking into account temporal effects, interpolation of huge amounts of data and the like. I think the most anyone like me can do is to put ideas out there. 99% of them will no doubt have been thought of before and probably dismissed, but you never know who is reading ;).

a1ex has been a great source of knowledge (not to mention all his ML developments) and it does push me to at least read-up on the things I have a use for or an idea about. It's great that you are looking deeper into the aliasing and moire issues and I hope you find something that may have been overlooked or at least gain a deeper understanding. Knowledge is what drives things forward :)

maxotics

Don't get me wrong, Andy, I'm not personally mad at anyone, just frustrated.  I'm sure they would answer 'get in line'!

Just so you know, I'm the first believer in trying to figure things out yourself before asking for help.  I've tried very hard.  I posted my code to Bitbucket and shared it with both Alex and g3gg0 to show them I'm serious.  But again, this isn't a diatribe against them.  They quickly answer my PMs and certainly try to help!  The takeaway from my post is really that ML is also held back, in my opinion, by a lack of effort to document the code and techniques in a way that gives someone a fighting chance to contribute.

When I asked myself if I was guilty of that I created a "Shooters" guide thread for the EOS-M.  Still, I'm no better than anyone else. 

As for AMaZE, as I said, these algorithms were created by people looking at moire from a photographic point of view.  There is no line skipping there.  Yes, some work better than others, and there are huge differences depending on what kind of moire issues you have.  However, the person who incorporates line-skipping into how they create their algos will also provide HUGE increases in quality.  I'm not joking when I say I'm trying to help a1ex become a star in this.  He is SO FREAKIN' CLOSE, IMHO.   Maybe he needs a little bit more 1% in him ;)

Seriously, I HATE having to shoot in crop mode.  As an artist, I'm sure you'll agree.  If a1ex could conquer this issue (which I've offered to help him with, with test patterns), he would revolutionize RAW.  I don't think that's an overstatement.  You tell me, as a filmmaker.

I wouldn't be surprised if Resolve is already going down that road.


Andy600

I get your point(s).

I think ML features are developed mainly because a dev has a personal need/wish for something and maybe it's also to push their own skillset and knowledge further. We are just the lucky ones who benefit with little or no contribution.

I can understand that if you have a 5D MkIII, which doesn't really suffer the moire/aliasing issues of the lower-end cameras, it might seem a bit pointless to be thinking about 50Ds etc. and better to leave it to other devs to port whatever you think up. And let's face it, we're shooting raw video on cameras that are punching way above their weight as it is. Debayering algorithms are not strictly within the ML remit either, and perhaps it's the devs of AMaZE, DCB etc. that we should be talking to, but again, these people must have their own agendas, and if they were shooting ML raw video they would no doubt already be here contributing their knowledge.

I'm not sure line skipping affects debayering as the bayer pattern is the same whether it's line skipping or not (IF the Canons keep 1 and lose 2 lines)!? I would love it if we could convert ML raw into another commercial raw format like ARRI or RED as they have some cool tools but then again, the sensors are different and specific for those cameras.

I don't mind shooting crop mode TBH except for the occasional framing cock-up or focus issue. I'm using my time with the 50D and ML raw to learn how to get the best results with what I have and learn Resolve etc. I don't know if I'll ever get the chance to do something commercial but at least I'll be better prepared because of understanding the raw workflow.

That said, yes, I would love line skipping to not be an issue but I'll just keep saving for the 5D3 and see what new developments come up in the meantime. a1ex's algorithm sounds interesting! :D

Is your code still up?

maxotics

Quote from: Andy600 on November 03, 2013, 04:49:41 PM
That said, yes, I would love line skipping to not be an issue but I'll just keep saving for the 5D3 and see what new developments come up in the meantime. a1ex's algorithm sounds interesting! :D

EXACTLY!  If I converted the same hours I spend on ML into business stuff I could buy a closet full of 5D3s.  I could buy one now if I really wanted.  (Instead I have $2,000 tied up in EOS-Ms and a 50D, etc.)  I really shouldn't be spending as much time on this as I do, for a lot of reasons.  I can't help it because I just love hot-rodding stock stuff.  I also like filming.  My original goal is to create a RAW-quality camera that I can fit in my fanny pack.  (I'm of the school that if I don't have a camera with me, I might as well not have any at all.)  So I'm as f'd up in the head as they come ;)

The interesting thing about ML development, the more I get into it, is that it happens at all.  First, these guys pretend it isn't that hard.  That's complete bull__.  Yes, anyone with basic C skill can start to develop, but they couldn't get anywhere without putting in 100s of hours learning image formats, Canon/Adobe arcana,  bit-wise arithmetic, the tools, etc., etc.   What unites Alex, g3gg0 and 1% isn't development skill, IMHO, but tolerance for pain, huge amounts of pain.  They know EXACTLY how hard this is, which is why they do it.  They're on the bleeding edge of hackerdom. 

Well, my daughter wants to play with a camera, so I better deal with that :)

araucaria

The problem is that it skips sensels, not whole pixels. This method avoids harsh aliasing (unlike the Nikon D90, which skipped whole pixel lines). So the camera skips this way:
R G
G B
R G
G B
R G
G B
R G
G B

So in a high-contrast area (a fine dark grass leaf with bright sky behind it), the sky gets read by an RG pair and the dark leaf by a GB pair, so when the sensels are debayered the result comes out as a red/magenta pixel. The same happens when the bright part falls on GB and the dark on RG: you get a blue pixel.
There is no way to avoid this; interpolating pixels would only help against stair-stepping aliasing, but I didn't find that one too bad on the 50D.
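A toy numpy sketch of that failure mode (the exact Canon skip pattern here is my assumption, not a documented fact): a one-pixel-high detail either vanishes entirely or is read at full strength, depending on which rows it lands on, which is why fine edges flicker and pick up false color.

```python
import numpy as np

# Toy model of the pattern described above: keep one RG/GB row pair,
# skip the next two pairs.
def kept_rows(height, skip_pairs=3):
    rows = []
    for block in range(0, height, 2 * skip_pairs):
        rows += [block, block + 1]          # one kept RG/GB pair per block
    return [r for r in rows if r < height]

scene = np.zeros((18, 4))
scene[3, :] = 1.0     # fine bright line falling on a skipped row
scene[6, :] = 1.0     # fine bright line falling on a kept row

# After skipping: the line on row 3 is gone; the one on row 6 survives
# untouched. There is no in-between value to reconstruct from.
sub = scene[kept_rows(18), :]
```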

Correct me if I'm wrong :D

3pointedit

I don't recall seeing this color corruption on the H.264 in-camera output, but surely it is getting the same chroma mis-sampling before compression? I wonder what the chip is doing to alleviate the magenta/blue error, as it isn't the same as false-color moire.
550D on ML-roids

araucaria

It's there but you don't notice it that much because of the muddy h264 processing.

maxotics

I'm in a bit of a rush, but here are two small parts of an image of a grid that will produce these effects.  I'm shooting a grid pattern used in crocheting.  By rotating the camera I can see how the interference patterns are produced.  I hope to create a video of this using non-debayered (raw) frames.

Here is the Bayer pattern:

[image attachment]

Now here it is debayered using bilinear (which is weak, but good for this purpose):

[image attachment]

If you look closely at this, and others I will post, you see certain seemingly predictable patterns.  I believe if you can calculate the angle of any hard line in the image, you can detect and fix (not perfectly, but greatly improve) the artifacts produced.
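For what it's worth, one standard way to get "the angle of any hard line" from a patch is the structure tensor of the image gradients. This is not maxotics' method, just one plausible ingredient, sketched in numpy:

```python
import numpy as np

def dominant_edge_angle(patch):
    """Estimate the orientation of the dominant straight edge in a patch
    from its intensity gradients (the classic structure-tensor trick).
    Returns the dominant gradient direction in degrees; the edge itself
    runs perpendicular to it."""
    gy, gx = np.gradient(patch.astype(float))       # d/drow, d/dcol
    jxx = (gx * gx).sum()
    jyy = (gy * gy).sum()
    jxy = (gx * gy).sum()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # tensor eigen-direction
    return np.degrees(theta)
```

Given the angle, a correction pass could in principle pick which neighboring sensels to trust when fixing the interference pattern, which is the part that would need the real research.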

SpcCb

Very interesting discussion!

araucaria / maxotics > If the source line skip pattern is:
R G
G B
R G
G B
R G
G B
R G
G B

how could the matrix look like that?

[image attachment]

There's something, somewhere, I don't understand.

gary2013

Quote from: Andy600 on November 03, 2013, 04:00:48 PM
[...]
Andy, you sound a lot like me on all of this. I can't help by being a programmer, but I can hopefully raise questions, test deeply from a user's standpoint, and maybe stimulate some thinking for others. We all want the same goals.

Gary

gary2013

Max, you are not alone.  :) Keep up the good work, please.

Gary

gary2013

I think what bothers me the most is the movement from aliasing (jaggy lines), moire and digital noise (mosquito noise). They all dance around in video, frame to frame, which is very noticeable. I think if it were static throughout the video it would be far less of a problem and probably not noticed by many people, but complete elimination is certainly the best solution. I am sitting here now looking at some video of a parking lot, and all the white lines on the ground (diagonals) are constantly wiggling around. It bugs the hell out of me and keeps making me seriously think about giving up on all these DSLRs for video recording. If anyone can crack these three things, that would be right up there at the top with all the other huge improvements since ML started.
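The "dancing" aspect specifically can be attacked temporally. A minimal sketch (my own illustration, not a tool from this thread): a pixel-wise median over a few frames of a static shot suppresses artifacts that change every frame, at the cost of smearing real motion, so a practical filter would gate it with motion detection.

```python
import numpy as np

def temporal_median(frames):
    """Pixel-wise median over a short window of frames. For a static scene,
    artifacts that 'dance' frame to frame get pushed toward one stable value;
    anything genuinely moving would smear, so real tools gate this by motion."""
    return np.median(np.stack(frames), axis=0)
```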

Gary

maxotics

Quote from: gary2013 on November 07, 2013, 04:42:25 AM
Bugs the hell out of me and it keeps making me seriously think about giving up on all these DSLRs for video recording. If anyone can crack these three things, that would be right up there at the top with all the other huge improvements since ML started.

I couldn't agree more.  First, to answer the question above: there is line skipping both horizontally AND vertically.  One of the obstacles to solving this problem is that there is little software that works with RAW video in a workbench way.  You can view RAW images in Photivo, for example, which I used in the above, but not video.  Part of what I'm doing is creating a viewer, based on g3gg0's MLV player, that would make it easy for me, or anyone, to test algos for fixing this with RAW video NOT debayered.  Again, one of the reasons I don't believe this has been fixed is that most people only have access to video that is already debayered, and that's too late!  You need to create and test algos against RAW images and analyze them for success, then debayer to check.


gary2013

Quote from: maxotics on November 07, 2013, 04:57:52 AM
[...]
I understand. I wish I could help more. I have been finding a lot more threads on ML and other forums. Reading a lot everyday to see what is going on and trying to learn what I can.

3pointedit

So these 'boundary skipped sensels' are the ones targeted by OLPFs? I guess they get an averaged value of the light at that location, so that there are no transient peaks to confuse the sensel groupings. Duplicating the previous line would result in jagged or missing edges. Can any information be reconstructed from those corrupted sensel sites?

I guess the easiest solution would be inter-frame substitution, but that would be slow and would require movement across the sensels.

maxotics

Quote from: 3pointedit on November 11, 2013, 03:29:41 AM
[...]

I don't believe the camera devs have any control over which sensels they read from.  They intercept the RAW stream that goes to the LiveView.  I've thought about analyzing nearby pixels for lines, but not what I think you suggest, which is an interesting idea: that the corrupted sensels may "inform" what kind of image is surrounding them.

This problem is not very high on my list anymore because ML, in general, is too disorganized and unprofessional.  This problem would require a team effort, and that's not ML's strong suit.   I don't mean that in a mean way.  It just is what it is.  The Black Magic Pocket Cinema cameras seem to be in supply now and that, and similar cameras, make ML worth some effort, but not too much ;)

gary2013

Quote from: maxotics on November 11, 2013, 04:23:00 AM
[...]
I have been following it.

sick stu

Hi all. I too shoot non-crop and huff when I see a nasty moire pattern. I'm also a video encoder in my day job and sometimes have to de-interlace video for clients. That got me thinking. For one of my shots in my Canon 50D raw vid 'Raw Sun' I used a de-interlace or anti-flicker filter in After Effects, and that removed a bit of moire on a shot of a boat at sea.

Then I thought of trying a quick test in VirtualDub: turning a clip from progressive to interlaced, then applying a de-interlace, blending the two fields and NOT throwing away field 2. The downside of blending fields is a slight motion smear (as if a slow shutter speed had been used, 1/24th of a sec etc.), but because the material is progressive to begin with it might not cause that smear/slow-shutter effect. So treating the clip as interlaced and then blending the fields (lines) might eliminate maybe all moire; I've already applied it to one of my shots. Come to think of it, I guess this process would have to be done at the CDNG stage?
Just an idea...
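In code terms, "blend fields without discarding field 2" boils down to averaging each row with its neighbor. A minimal numpy sketch of that step (my illustration of the idea, not the actual VirtualDub filter):

```python
import numpy as np

def blend_fields(frame):
    """Treat a progressive frame as interlaced and blend the two 'fields':
    each output row becomes the average of itself and the next row (last row
    repeated at the border). This is effectively a 2-tap vertical low-pass,
    which is why it softens the same high frequencies that alias."""
    padded = np.vstack([frame, frame[-1:]])   # repeat last row at the edge
    return (padded[:-1] + padded[1:]) / 2.0
```

The trade-off is exactly the one described above: vertical resolution is halved in exchange for suppressing the moire.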

LucianParaian

Just wanted to share the solution I found for reducing aliasing artefacts.

Taking sick stu's idea into account, I applied a Field Blur filter in Photoshop!!! Three times, with a size of 1 pixel.
(You first need to convert your video layer into a smart object so the filter is applied to the whole vid.)
Afterwards, you can apply an unsharp mask if you think it has lost sharpness.

The end result is not perfect, but I was happy with it.
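For anyone without Photoshop, repeated passes of a tiny blur can be approximated outside it. This is a rough numpy stand-in for the steps above (a 3x3 box average per pass, which is not the same filter Photoshop uses):

```python
import numpy as np

def tiny_blur(img, passes=3):
    """Repeated 3x3 box averaging with edge clamping -- an approximation of
    'three passes of a 1-pixel blur', applied to a single grayscale frame."""
    out = img.astype(float)
    h, w = out.shape
    for _ in range(passes):
        p = np.pad(out, 1, mode='edge')       # clamp borders before averaging
        out = sum(p[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return out
```

As with the field blending, this trades a little sharpness for stability, hence the unsharp mask afterwards.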

Let me know how this is working for you. Or if you improved it in any way.