Debayer without interpolation?

Started by 50mm1200s, July 18, 2018, 04:08:00 AM


50mm1200s

Does anyone know of a software package or research paper implementing something like this?


Kharak

"Like this" could mean a lot of things; what I see is a Bayer pattern shifted around.

RawTherapee has many debayering options; perhaps one of them does something like this.
once you go raw you never go back

g3gg0

As far as I can understand these fragments, the idea is to:
- shoot [n] images
- detect the motion between them (optical flow)
- overlay the raw images accordingly

If you have some motion, due to vibration etc., different Bayer pixels now overlap and you get a good estimate of what R, G and B are at a specific "image" location.
I wonder if it was meant to shift the sensor automatically (didn't Hasselblad already do this using piezo?) or if unsteady hands would deliver the required motion.
The latter only makes sense if you have motion *estimation*, because with piezo you already know the motion.
We could capture "short 4-frame MLV files" like with silent picture, and the processing could be done on the PC, but...
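The motion-estimation step above can be sketched with plain phase correlation. This is a toy numpy sketch (not anyone's actual code), assuming global integer shifts between frames; `estimate_shift` is a name made up for illustration:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the global integer (dy, dx) shift between two
    single-channel frames via phase correlation."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    # normalised cross-power spectrum -> sharp correlation peak at the shift
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # fold wrap-around peaks into negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

With the shifts known, each raw sample can then be dropped onto a common grid, keeping track of which Bayer colour it carries.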

*AFTER* someone has proven that it makes sense ;)
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

Danne

I tested something like this:
https://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/

I think it needs around 20 images for best results. I tested averaging with enfuse and alignment with Hugin. The noise reduction was stunning; the resolution gain was a bit harder to obtain.
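The noise part of this is easy to check numerically. A small synthetic numpy experiment (not enfuse or Hugin themselves, just averaging perfectly aligned noisy frames):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((64, 64), 0.5)                      # the "true" scene
# 20 perfectly aligned frames with independent Gaussian noise
frames = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(20)]

stacked = np.mean(frames, axis=0)                   # plain average
single_err = np.std(frames[0] - truth)              # ~0.1 per frame
stack_err = np.std(stacked - truth)                 # ~0.1 / sqrt(20)
```

Averaging N frames cuts the random noise by roughly sqrt(N), which matches the stunning noise reduction; the resolution gain additionally needs sub-pixel shifts, which is why it is harder.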

garry23

Just to add another perspective to @Danne's.

I wrote a script to auto create a set of images for super resolution processing: http://photography.grayheron.net/2016/12/a-new-technique-for-super-resolution.html

My approach was to be on a tripod and use refocusing to generate the frame-to-frame differences. You could simply jiggle the IS, refocus, or do both for each image.

Had some fun for a while but didn't develop the technique further.

histor

I used to enjoy such super-resolution tricks many years ago with a digital compact camera and PhotoAcute. It took me a long time to understand that what you gain is not super-resolution but a much wider dynamic range. A camera sensor needs a strong anti-aliasing filter, otherwise the R, G and B pixels will record completely different sharp details of the subject; so the image is optically blurred just above the sensor to match the sensor's resolution. Some astrophoto enthusiasts try to scratch off the microlens layer together with the RGB pattern, but the effect isn't as great as might be expected (pixel sensitivity drops a lot without the microlenses).

But we still get some good results averaging layers. We sacrifice resolution (fine detail) during alignment and image rotation, but the noise falls dramatically. Just in case: the image-averaging scripts for Photoshop are good but not perfect. There are a number of averaging approaches; I'll point to the MaxIm DL manual (https://diffractionlimited.com/help/maximdl/Combine_Method.htm), which was a great pleasure to read even without using the program itself.
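One of the combine methods that manual describes, sigma clipping, is easy to sketch. A simplified numpy version (my own sketch, not MaxIm DL's implementation):

```python
import numpy as np

def sigma_clip_average(stack, kappa=2.5):
    """Average a stack of aligned frames per pixel, ignoring samples
    more than kappa standard deviations away from the per-pixel
    median (rejects hot pixels, cosmic rays, passing satellites)."""
    stack = np.asarray(stack, dtype=float)
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0)
    keep = np.abs(stack - med) <= kappa * std + 1e-12
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)
```

A plain mean drags outliers into the result; the clipped average throws them away before averaging, which is why it is preferred for stacking many frames.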

50mm1200s

Quote from: g3gg0 on July 18, 2018, 11:07:44 AM
As far as I can understand these fragments, the idea is to:
- shoot [n] images
- detect the motion between them (optical flow)
- overlay the raw images accordingly

If you have some motion, due to vibration etc., different Bayer pixels now overlap and you get a good estimate of what R, G and B are at a specific "image" location.

Yes, that's the idea...

Quote
I wonder if it was meant to shift the sensor automatically (didn't Hasselblad already do this using piezo?) or if unsteady hands would deliver the required motion.
The latter only makes sense if you have motion *estimation*, because with piezo you already know the motion.
We could capture "short 4-frame MLV files" like with silent picture, and the processing could be done on the PC, but...

Manual motion.

Quote
*AFTER* someone has proven that it makes sense ;)

If you have any research on this, hit me please ;)
I had this idea after having issues with demosaicing the images generated by HDRMerge. So I thought that maybe debayering would not be necessary if you have enough images with shifts between them...

50mm1200s

@Danne @garry23 Thanks for sharing. But I think the techniques presented just use a variation of noise averaging. My idea was to apply this before demosaicing, to get full colour information...

50mm1200s

Found it! They call it "Bayer drizzle". The implementation seems to be by Dave Coffin (of dcraw). The software DeepSkyStacker uses it.
It would be nice to have it in HDRMerge. I'll see if Beep6581 has more information on this.

Danne

HDRMerge, nice little program. There were some hurdles compiling it on Mac, but I put together a build script that fixes the issues here:
https://bitbucket.org/Dannephoto/hdrmerge_compiler/src/default/

SpcCb

Quote from: 50mm1200s on July 19, 2018, 10:57:33 AM
Found it! They call it "Bayer drizzle". The implementation seems to be by Dave Coffin (of dcraw). The software DeepSkyStacker uses it.
It would be nice to have it in HDRMerge. I'll see if Beep6581 has more information on this.
The "Drizzle" algorithm was originally invented by Andrew Fruchter and Richard Hook for images made by the Hubble Space Telescope. I worked on it for a couple of years during my studies; it's an amazing algorithm. Several astronomy programs use it; the first on PCs was Iris, if I'm not mistaken, a decade or more ago.

There are some papers on Drizzle; take a look in the Harvard library for the source: http://adsabs.harvard.edu/abs/2002PASP..114..144F
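A heavily stripped-down sketch of the idea in that paper: each input pixel is "dropped" onto a finer output grid according to its frame's shift, with a weight map tracking coverage. This toy numpy version uses a point kernel and known shifts; the real Drizzle maps whole pixel footprints (the `pixfrac` parameter) and handles rotation and per-pixel weights:

```python
import numpy as np

def drizzle(frames, shifts, scale=2):
    """Point-kernel drizzle: drop each input pixel onto the single
    fine-grid cell that its shifted centre lands in."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(out)
    yy, xx = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(frames, shifts):
        oy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox), img)       # accumulate flux
        np.add.at(wgt, (oy, ox), 1.0)       # accumulate coverage
    return out / np.maximum(wgt, 1e-12)     # uncovered cells stay 0
```

With enough dithered frames the fine grid fills in completely, which is where the resolution gain comes from.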

50mm1200s

Quote from: SpcCb on July 21, 2018, 03:23:32 AM
The "Drizzle" algorithm was originally invented by Andrew Fruchter and Richard Hook for images made by the Hubble Space Telescope. I worked on it for a couple of years during my studies; it's an amazing algorithm. Several astronomy programs use it; the first on PCs was Iris, if I'm not mistaken, a decade or more ago.

There are some papers on Drizzle; take a look in the Harvard library for the source: http://adsabs.harvard.edu/abs/2002PASP..114..144F

Thanks!
Do you know of any open-source software implementing this method? In my research I was only able to find commercial/freeware software doing this.

edit: since video already has motion between frames, maybe it would be possible to implement this for video too (at least for slowly moving scenes, like landscapes, not for sports footage). Just thinking...

edit2: Found this open implementation.

khaja84

The technique you have posted is called the "pixel shift" technique.
Currently some cameras, like the Pentax K-1, implement this, but it is only useful for shooting static subjects.
An accurate implementation of pixel shift is not possible in software alone (AFAIK).
However, the super-resolution technique described on that page creates a sharper, more detailed image, though it too needs several shots...

Also, Sigma cameras use the Foveon multi-layer sensor design, with three layers: one for red, one for green and one for blue. So there is no need for interpolation.

50mm1200s

If anyone has any ideas to contribute:
https://github.com/jcelaya/hdrmerge/issues/157#issuecomment-407909189

Maybe it's too complex to be done or this idea is just stupid ¯\_(ツ)_/¯

garry23

@50mm1200s

From my perspective I see this thread discussing two approaches.

One where the lens remains fixed and the sensor is moved under control by a single pixel in each direction. Although some cameras can do this, Canon EOS bodies running ML can't.

The other where you exploit 'random' camera-lens movements, e.g. super-resolution techniques, where the script I wrote could be useful.

Bottom line: sorry I can't help you any further, but good luck with exploring your ideas.

Cheers

Garry

50mm1200s

Quote from: garry23 on July 26, 2018, 07:32:45 AM
The other where you exploit 'random' camera-lens movements, e.g. super-resolution techniques, where the script I wrote could be useful.

This is the correct approach, but there's no lens movement. The idea is very different from yours in many ways. First of all, your method basically uses focus stacking plus super-resolution, and (it seems) acts after demosaicing. The drizzle idea is not to get a bigger image per se, but to have full colour information for each pixel. Since the Bayer filter gives only one value per pixel (from the RGGB pattern) that is then interpolated to generate the image, the demosaicing algorithm introduces many artifacts. This approach has many benefits (mentioned in the GitHub link above).
Also, I think (but am not sure) your method cannot work with "ZeroNoise", as handheld photos require faster shutter speeds to avoid motion blur (so you have to raise the sensor sensitivity to achieve correct exposure). Drizzle can work on a tripod, with fixed aperture, focus and ISO, changing only the shutter speed for the different exposures. Diagram (click to enlarge):




I don't get why this is so difficult to understand. I'm not good at English, so that's probably it. :(
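The per-pixel accumulation described in this post can be sketched in numpy: with known integer shifts between RGGB raw frames, each sensel's value is dropped into the colour plane it actually measured, so no channel is ever interpolated. A toy sketch with made-up names (not HDRMerge or DeepSkyStacker code):

```python
import numpy as np

# RGGB pattern: colour index (0=R, 1=G, 2=B) of each sensel position
CFA = np.array([[0, 1],
                [1, 2]])

def accumulate_cfa(raws, shifts):
    """Merge shifted RGGB raw frames into a full-colour image on the
    reference grid, averaging real samples per channel instead of
    interpolating. shifts[k] is frame k's (dy, dx) scene shift."""
    h, w = raws[0].shape
    rgb = np.zeros((h, w, 3))
    wgt = np.zeros((h, w, 3))
    yy, xx = np.mgrid[0:h, 0:w]
    ch = CFA[yy % 2, xx % 2]                  # colour seen by each sensel
    for raw, (dy, dx) in zip(raws, shifts):
        oy, ox = yy - dy, xx - dx             # undo the frame's shift
        ok = (oy >= 0) & (oy < h) & (ox >= 0) & (ox < w)
        np.add.at(rgb, (oy[ok], ox[ok], ch[ok]), raw[ok])
        np.add.at(wgt, (oy[ok], ox[ok], ch[ok]), 1.0)
    return rgb / np.maximum(wgt, 1)           # unsampled channels stay 0
```

With enough distinct shifts every pixel eventually collects real R, G and B samples, which is exactly the "full information for each pixel" goal; pixels that never see a given colour would still need interpolation as a fallback.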

garry23

@50mm1200s

I understand what you are saying, and I appreciate that the technique I coded up, i.e. super-resolution, is different  ;)

BTW my script was written for tripod use and is also usable with exposure bracketing. That is, each time the lens is jiggled and/or the focus is changed, you can take a bracketed set.

Each to their own  :)

As I say, good luck.

Cheers

Garry

Levas

Am I right in saying that you want to merge multiple raw files into a new raw file?
So super-resolution, but the alignment and merging must be done before debayering?

This piece of free software came up on multiple photo websites: Kandao Raw+.
It can merge up to 16 raw files and outputs a new raw (DNG) file.
https://www.kandaovr.com/en/software
Fun to play with, but unfortunately it can't do batch processing, so it's not usable on video DNG sequences.
Still, you can try out how 16 DNG frames of an MLV file merge together and whether the concept works well in practice.

garry23

@Levas

I believe the OP wants to operate before any debayering. Would the Kandao software do that?

Danne

HDRMerge blends pixels, doesn't it? So tweaking that code should be the closest path to realising this idea, no? LibRaw is open source and well documented.

Levas

The Kandao Raw+ software merges up to 16 raw files into a single DNG file.
As far as I know, this new DNG file is raw and undebayered.

garry23

@Levas

Thanks, I may look at it myself.

Cheers

Garry

Danne

@Levas
Check the HDRMerge code. It can merge any number of frames, undebayered. If it's not in the code now, it is most probably tweakable to do anything the OP needs...

khaja84

If you want to go with debayering a single image, then the following algorithms seem to be good...

Self-similarity Driven Demosaicking  (BCMS-SSDD). Info: http://www.ipol.im/pub/art/2011/bcms-ssdd/
Directional Filtering and a posteriori Decision (DFPD). Info: http://blog.cuvilib.com/2014/06/12/dfpd-debayer-on-gpu/
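For reference, the baseline those algorithms compete against, plain bilinear demosaicing, fits in a few lines. A numpy-only sketch of mine (`bilinear_demosaic` and `_fill` are illustrative names, not from either link); the 3x3 normalised box filter reproduces bilinear interpolation on an RGGB grid:

```python
import numpy as np

def _fill(plane, mask):
    """Replace unknown samples by the average of the known samples
    in their 3x3 neighbourhood (a normalised box filter)."""
    h, w = plane.shape
    p = np.pad(plane, 1)
    m = np.pad(mask, 1).astype(float)
    num = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    den = sum(m[i:i + h, j:j + w] for i in range(3) for j in range(3))
    out = plane.copy()
    unknown = ~mask
    out[unknown] = num[unknown] / np.maximum(den[unknown], 1e-12)
    return out

def bilinear_demosaic(raw):
    """Baseline bilinear demosaic of an RGGB raw frame."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ch = np.array([[0, 1], [1, 2]])[yy % 2, xx % 2]   # RGGB layout
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        mask = ch == c
        rgb[..., c] = _fill(np.where(mask, raw, 0.0), mask)
    return rgb
```

It is exactly this neighbour averaging that produces zipper and false-colour artifacts at edges; SSDD and DFPD exist to do better there.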

megapolis

Here you can download free software for HQLI, DFPD and MG demosaicing algorithms:
https://www.fastcompression.com/download/download.htm
You can test DFPD debayer to evaluate image quality.
Please note that you will need an NVIDIA GPU (6xx series or later).