Method for Getting High-Quality Photos from Crappy Lenses

Started by blade, September 30, 2013, 04:28:52 PM



1%

This would be awesome for those cheap macro rings... I'm downloading the "code" zip... but is there a functional implementation of this?

wolf


hookah

5D3, Sigma Art 35mm 1.4, Tamron 24-70mm 2.8 VC, Tokina 11-16 2.8, Canon 50mm 1.4 + 100mm 2.8 macro + 15mm

JCBEos

This is just very smart... and awesome!

My brain hurts a little after trying to understand the mathematics...

but what if we could apply this to a high-end lens? Would it make an even sharper lens???

My Youtube Channel -> comments and likes welcome ;)

g3gg0

Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

blade

Quote from: JCBEos on September 30, 2013, 09:02:05 PM
but what if we could apply this to a high-end lens? Would it make an even sharper lens???

As this is way out of my expertise, I can't say whether that's true, but they do present some shots with a normal lens:
Standard camera lens at f/4.5 (Canon Zoom EF 28-105 Macro) 

Hover over the picture to toggle the effect on and off... It looks great.

http://www.cs.ubc.ca/labs/imager/tr/2013/SimpleLensImaging/standard_lens_and_multispectral_results.html

It does seem to take some CPU power, though, so in-camera processing looks like a no-go. It might be combined with other post-processing like Dual ISO. (Again, I am out of my league to claim such a possibility as a real-world option.)
eos400D :: eos650D  :: Sigma 18-200 :: Canon 100mm macro

1%

From what I gather, you would need to shoot their chart for each lens and then post-process "normal" images with that data. Seems very doable, and the results look good. I wasn't thinking in-camera, but everyone is already doing PP anyway; what's one more step, especially on hazy pics you would otherwise toss?

maxotics

Yes, PP is the key.  If you can accept that, all kinds of things can be done. (I doubt it would add much time to your workflow.)  I used the AForge library a bit when I was doing panorama robotics, and it's amazing what it can do.  There is a fair amount of open-source code out there that could be integrated into ML PP.

AFAIK there are really only two major types of distortion, or problems: 1) different wavelengths of light travel at different speeds through glass, so they bend by unequal amounts, and 2) the image must fall on a flat plane (the sensor).  Lens makers pull every kind of mechanical trick with coatings, and then often add elements just to extend certain properties, like wide angle, into the sensor cavity.  I don't see, theoretically, why you can't do what the article says.  In fact, I believe camera makers are already doing something similar with pancake lenses.

Many image processors probably work mostly with 24-bit color instead of the Bayer data.  I believe you can be really effective with Bayer data.  Also, the MLV format would allow us to save information to configure later PP.
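To make the Bayer idea concrete, here is a minimal sketch in Python (my own illustration, not existing ML code, and it assumes an RGGB pattern; the MLV metadata would tell you the camera's real mosaic layout). It splits the raw mosaic into four colour planes so something like per-channel correction could run before demosaicing:

Code:
import numpy as np

def split_rggb(bayer):
    """Split a raw RGGB mosaic into its four colour planes."""
    r  = bayer[0::2, 0::2]   # red sites
    g1 = bayer[0::2, 1::2]   # first green sites
    g2 = bayer[1::2, 0::2]   # second green sites
    b  = bayer[1::2, 1::2]   # blue sites
    return r, g1, g2, b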

I would think if you locked the mirror up on many of the Canons, and made lenses that go into the cavity, and used software like that, you'd have not only a cheap RAW video solution, but a SMALL one--at almost every focal length!

Very cool stuff indeed!


SpcCb

Very interesting, indeed.
Looks like a Richardson-Lucy deconvolution.

We need a high-precision PSF of the camera + optic pair (it's sampling dependent), like one measured in a lab.
And if the subject moves... you'd need a supercomputer to run the deconvolution iterations. ;)


PS: JCBEos > already done :) -> http://www.eos-numerique.com/forums/f67/plan-large-sur-la-californie-216558/
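For anyone who wants to see what that deconvolution actually does, here is a minimal textbook Richardson-Lucy sketch in Python with a known PSF (my own illustration; the paper's algorithm is more elaborate, with cross-channel regularization, so don't mistake this for their code):

Code:
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    # Iteratively refine an estimate of the sharp image: re-blur the
    # current estimate, compare against the observation, correct by the ratio.
    estimate = np.full(observed.shape, 0.5)
    psf_mirror = psf[::-1, ::-1]              # adjoint of the blur kernel
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)    # eps avoids division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

Run it on a grayscale float image with the measured PSF; more iterations sharpen more but also amplify noise, which is exactly where the supercomputer joke comes in.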

1%

Quote from: SpcCb
We need a high-precision PSF of the camera + optic pair (it's sampling dependent), like one measured in a lab.

So you can't compute the PSF at home?

SpcCb

Quote from: 1% on October 01, 2013, 01:21:58 AM
So you can't compute the PSF at home?
If you don't expect a high-precision PSF/results, it could be done 'at home' with a special laser spot and optical-analysis software like WinRoddier.
Or by night, using a star instead of the laser. Actually, I prefer to take data from a star; it's easier, even though turbulence etc. affect readings on a star, because otherwise we'd need a very small laser spot to get good data, or we'd have to place the laser very far away to get a small, coherent spot.

This process is well known in astronomy for testing optics or for getting the PSF and aberrations for post-process deconvolution.
But I think it could look complicated to regular users (?).
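In its naive single-frame form, the star method is only a few lines; here is a sketch (my own illustration, assuming the point source is isolated, unsaturated, and away from the frame edges; a real measurement would stack many short exposures to average out the turbulence mentioned above):

Code:
import numpy as np

def psf_from_point_source(frame, half=16):
    # Cut a window around the brightest pixel (the star / far laser spot),
    # subtract the background, and normalize so the PSF integrates to 1.
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    cut = frame[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    cut -= np.median(cut)            # crude background subtraction
    cut = np.clip(cut, 0.0, None)    # no negative intensities
    return cut / cut.sum()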

ItsMeLenny

Chromatic aberration is caused by the fact that colours have different wavelengths.
The cheap 50mm from Canon, despite having multiple elements, has some minor aberration.
Also, Holga makes plastic lenses, where the whole idea is to get a shitty look;
however, I wonder if this equation would work on Holga lenses, since they are plastic rather than glass,
so they would have a different refractive index or whatnot.

SpcCb

Chromatic aberrations are caused by the wavelengths and the refractive index of the lenses inside the optic.
The higher the index, the smaller they are; and the longer the wavelength, the smaller they are.
Besides, in the real world, the longer the focal length, the easier it is to correct aberrations (because of the length of the displacement f/[angle]).

Optical equations work in all cases, but it's very hard to compute the equation for an optic with many lenses; it depends on the optical formula.
If you look at the formula of a Canon DO optic, where you find plastic lenses arranged according to the Fresnel-lens formula, it's not as simple as an optic with a single BK7 lens. Same thing with high-end optics that have a front low-dispersion triplet or quadruplet plus IS and moving focus groups.
Plastic or BK7 or fluorite, and different wavelengths: that is not what's complicated. Those are just different values in the same equation.

But here, the way of correcting those aberrations is different.
It's like 'reverse engineering': analyse how the optic 'works' on a reference (the PSF), then compute the inverse calculation to recover the reference without aberrations.
So whether the optic has one lens or 20, the computation is the same.
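To put rough numbers on that wavelength dependence, here is a back-of-the-envelope calculation in Python using Cauchy's approximation for BK7 glass (the coefficients are the usual textbook values; the thin-lens scaling is an idealization, so treat the result as an order of magnitude):

Code:
A, B = 1.5046, 0.00420            # Cauchy coefficients for BK7, B in um^2

def n_bk7(wavelength_um):
    # Cauchy's equation: n = A + B / lambda^2
    return A + B / wavelength_um ** 2

n_blue, n_red, n_d = n_bk7(0.486), n_bk7(0.656), n_bk7(0.5876)
f = 50.0                          # nominal focal length in mm at the d line
# For a thin lens, 1/f = (n - 1) * K, so f scales as 1/(n - 1):
f_blue = f * (n_d - 1) / (n_blue - 1)
f_red  = f * (n_d - 1) / (n_red  - 1)
print(round(f_red - f_blue, 2))   # ~0.77 mm of longitudinal chromatic shift

On a simple 50mm singlet, blue and red focus almost a millimetre apart; that smear is exactly what a measured PSF captures and what the deconvolution tries to undo.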

g3gg0

hmm, the way i understand the PSF, it is comparable to a 2-dimensional CDMA code.
this code is the inverse function of the distortion the lens causes.

what i wonder now: what is the effect of the unavoidable sensor noise?
will it, like some single-frequency noise, get averaged out, so the noise is mostly cancelled?
or will it cause severe trouble for the recovery algorithm?
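The short answer is that noise will not simply average out: wherever the lens transfer function is weak, a naive inverse filter divides by a small number and blows the noise up, which is why practical recovery regularizes the inversion. A minimal Wiener-filter sketch (my own illustration; the constant nsr is a hypothetical stand-in for a measured noise-to-signal ratio, and the paper itself uses a different, prior-based regularizer):

Code:
import numpy as np

def wiener_deconvolve(observed, psf, nsr=1e-2):
    # Embed the PSF in a full-size frame and shift its peak to (0, 0)
    pad = np.zeros(observed.shape)
    py, px = psf.shape
    pad[:py, :px] = psf
    pad = np.roll(pad, (-(py // 2), -(px // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(observed)
    # Wiener filter: H* / (|H|^2 + nsr) instead of the naive 1 / H;
    # the nsr term damps exactly the frequencies where noise would explode.
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

Raising nsr trades sharpness for noise suppression, so a noisier sensor just means a more conservative recovery, not a broken one.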

Shield

Maybe this software could save some of the crappy, soft Sony A99 footage I took last year?  The A99 looked like the example on the "left".  ;D

g3gg0

we have matlab licenses, embedded coder, simulink, targetlink - but i don't use any of them :)
today we had a closer look at the sources with some colleagues.
we will perhaps try to generate C code from the scripts if we have some spare time.

i could access the licenses over VPN from home with my business laptop, but i don't have any free space for matlab on it :)

Doyle4

saw this on a popular photo forum yesterday... i must say, as soon as i saw it i thought of the ML team haha.

ItsMeLenny

I'd like to see it run on this video clip; I'm pretty sure they used an old lens to achieve the desired look, but I'd prefer it corrected and crisp.
Warning: Taylor Swift http://youtu.be/RzhAS_GnJIc

600duser

Astronomers take pictures in the most abysmal conditions imaginable. They figured out the future long ago.

Computational photography for the win... see the Hubble Deep Field image for details!


Step 1: point.

Step 2: press the button.

Step 3: your camera whirs quietly as it stacks a dozen or so frames at a thousand frames per second.

Step 4: slowly increase the aperture of your smile as the 'quick preview image' re-crystallizes into gigapixel heaven.

You could shove a crisp packet into a toilet roll and use it as a lens... for some astronomers that would be a welcome upgrade, given the paltry number of photons they have to work with 24/7/365.

Landscape astrophotography: with the right software and hardware, the whole thing can be automated.
http://www.youtube.com/watch?v=Rydg7JGTAbw



You can also get frame-stacking software for video. Since many cameras like the 600D can shoot video at 30 or 60 fps, if you want a good picture of something that isn't moving, a few seconds of 'shaky hand' video is all the data you need. Frame-stacking video turns your budget camera into an EOS Mk III annihilator! Heck, even a webcam can blow a $10,000 DSLR out of the water via frame-stacked video.
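The core of frame stacking fits in a dozen lines; here is a translation-only sketch (my own illustration using scipy/scikit-image; real stackers also handle rotation and lens distortion between frames):

Code:
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def stack_frames(frames):
    # Align each grayscale frame to the first by phase correlation,
    # then average: N aligned frames cut random noise by roughly sqrt(N).
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        offset, _, _ = phase_cross_correlation(ref, f.astype(float))
        acc += nd_shift(f.astype(float), offset)
    return acc / len(frames)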

Computational photography exchanges photon-capture time for money. When you combine frame stacking with focus stacking and HDR stacking, that photo-zoom scene in Blade Runner becomes a reality.

Future photographers will simply wave a wand about in the general direction of the subject, and the 'puter will do the rest.  Imagine a device the size and shape of a packet of Tic Tacs.

The future of photography. ENJOY!

http://www.youtube.com/watch?v=KCO4hO7CQ6A


g3gg0

@600duser:
the thing the original poster talks about is totally different from drizzling and median stacking.
it is about lens correction. you'd better read the paper.

600duser

did i mention the trouble with Hubble? i think i did.

Lens correction is often used by astronomers; nothing is stopping DSLR users from setting up calibration tests or even writing their own lens-correction code. I've done that myself for several projects.

It's good to see that Canon has included some basic lens-correction features in the bundled software.

Digital photography is about to undergo a major revolution.  A few more doublings of sensor size/pixel density and CPU power, and the barn doors get kicked open.

painya

@600duser
Mind sharing some of that correction code with the forum?
Good footage doesn't make a story any better.

600duser

When I get time I'll prolly write some for the 600D; it's not high on my list right now.

I'm not a photographer, but I design, test, and build vision systems (androids), so there is some crossover.  I have a room full of computers at my disposal, so processing power is not an issue. I'm experimenting with wide-angle lenses, fisheye lenses, and mirrors at the moment.  The field of view of most lenses is far less than what I need.

DSLRs have such a low frame rate for stills that they are unusable for the kind of projects I'm working on, where you need to shoot 10 to 120 fps in low-light conditions. Which is why I currently need a room full of computers.

Interesting article on lens correction
http://www.cambridgeincolour.com/tutorials/lens-corrections.htm

DSLRs are fine for photography, the intermittent taking of stills, but they have yet to come to terms with video or computing. So there are still a lot of barriers in their way. Webcams are plug-and-play; smartphones have powerful generic CPUs and a PC-like architecture.

I did a lot of homework before I bought the 600D. It's OK, I can live with it, but it's like a museum piece: 18th-century tech badly blended with 21st-century tech.  Having lived through the early days of computing, it's kind of funny seeing digital cameras go through infant school. Photography in the future will be as easy as shaking yer Tic Tacs!

Cameras of the future will be 50% battery by weight, and they will possess immense parallel computation power. At some stage they will exceed the power of a typical surfer's desktop. They will be liquid-cooled, and the lens glass will probably be made of cheap, lightweight plastic, similar to the vid in the OP.  Something else will change: you won't have to try to keep the camera still; there will probably be a vibration device built into the camera to make it shake. This will be the sensor oscillating in order to gather more light and information. Invisible nano-pulse lasers will assist with focus and range-finding. Press the shutter and the ultra-light plastic lens will prolly retract and recoil like a gun. You might even see compound-eye-type lenses further down the line.

One can imagine the inverse of this tech making its way into cameras.
http://www.youtube.com/watch?v=qOsibeDX8jM


How to Shoot 4K Video with the Galaxy Note 3



g3gg0

Quote from: 600duser on October 14, 2013, 07:08:26 AM
did i mention the trouble with Hubble? i think i did.

yep, you mentioned hubble, but only the stacking techniques (median, averaging) and drizzle.
both are absolutely unusable or problematic here, often even for landscape photography.
IIRC the hubble technology was mainly drizzle, but correct me if i am wrong.

this approach isn't new either - it's quite old. i read about such things a few years ago, and co-workers use it in their own code.
but this one is optimized very well, and it can likely be used for our use case - if the calibration process isn't too complicated.


600duser

It's the calibration bit that is tedious, and that's probably why such techniques aren't used to their fullest by field photographers.

Here again, greater general computation power can automate many of the steps 'intelligently'. It will prolly be a couple of years before these techniques start to shine through to the consumer showroom. The switch to f/1 lenses will be a big gain; it's crazy how much light complex DSLR lenses throw away. You can't correct film easily in the darkroom, so in the past the focus was on good glass.



Working on a fun project right now: stuffing a crisp packet down a cardboard toilet-roll tube and using that as a lens. Just like astronomers, I work in abysmal light and field conditions; stress-testing to the extreme is all in a day's work. I'll also be making some 100% plastic lenses for my Canon 600D, just for kicks. I need some lenses anyway but don't fancy shelling out the $ they ask. The challenge here is to make cheap lenses even cheaper than old, cheap second-hand lenses but just as good or even better picture-quality-wise.

Now that computation has become cheap enough and portable enough (thanks in part to LiPos), it is feasible to start building 'the ultimate' computerised camera.

Photographers really yearn for 'quality', but 'camera users' want utility.  So I think we will see cheap, light plastic lenses + computation take off big time. When it comes to video, your mind's eye does a lot of post-processing. This is why early TV didn't have to wait for Full HD to be successful: 320x240 pixels is pleasant enough for casual viewing, and 640x480 is quite watchable.

600duser

This is some pretty crazy stuff! Photography in 2050 AD is gonna leave our eyebrows with purple fringing, lol.

More computational photography. Object recognition allows you to apply HDR or tones to individual objects in the picture, and much more besides.
http://www.youtube.com/watch?v=o8ukJuezF0w

Phase-Based Video Motion Processing. Would it make a great Easter egg for Magic Lantern? :o More groovy than 'the snowstorm'.
http://www.youtube.com/watch?v=W7ZQ-FG7Nvw

Light field
http://www.youtube.com/watch?v=q26mekrMoaY


g3gg0

i prefer realistic, non-science-fiction concepts.
i prefer C code, not videos or powerpoints.

sorry :)

SpcCb

Quote from: 600duser on October 14, 2013, 09:37:01 AM
(...)
Working on a fun project right now: stuffing a crisp packet down a cardboard toilet-roll tube and using that as a lens. Just like astronomers, I work in abysmal light and field conditions.
(...)
Sorry to say this (I hope it will not hurt you), but you are far from what astronomers do ;)

Astronomers work on science projects to understand and explain things, not for fun and marketing stuff. And if astronomers use highly complex optics and cameras, it's not because they like SF or Freud; it's because if you don't get good information from the source, nothing can 're-create' it after recording. As we say, good science starts with good observation.

However, I'm sure you can do very nice cosmetic corrections to pictures, better UI ergonomics, and find ways to do better in future development. It's a very interesting job, and good for users ;)


Note about drizzling (a very good process, by the way, based on a biological phenomenon): it looks inapplicable here for DSLR photo/video, because it needs multiple translated/rotated sources to work well. And IMHO, all this PP is not really an ML matter: PP has to be done on a computer, after shooting, especially since ML users shoot RAW (does anyone still use JPEG here? :) ).

600duser

Drizzle
http://en.wikipedia.org/wiki/Drizzle_%28image_processing%29

I mostly work with bitmaps for the vision project.  Not sure why cameras don't export pictures as BMPs, as they are a no-fuss halfway house quality-wise (JPEG, BMP, RAW). JPEG is like MP3, losing valuable data. I tend to work with simple uncompressed formats like BMPs and WAVs.

The techniques and software that I use lean more towards astronomy than stills photography, but it's a different kettle of fish.  I build eyes capable of reverse-engineering reality (for machine minds, not human ones), whereas a camera, though eye-like, tries to recreate a photonic reality for human eyes.

There is a lot of crossover, of course, but the details differ. I have to very deeply process still images in less than 1/10th of a second: optical flow, feature extraction and recognition, motion detection, horizon detection, pillar tracking, multi-axis orientation, etc., as you have to compensate for and coordinate eye, head, body, and walking motion. Trying to figure out what's moving: you, the environment, or both! Imagine skidding on a banana while skateboarding on a train at night in the middle of a thunderstorm. That about sums up the complexity my vision system has to cope with unaided. Crisp-packet toilet-roll lenses are what I do in my tea break.

I'm just as likely to reach for a biology book as a physics book. I guess that marks the difference between an android eye and a camera. There is some amazing potential in computational photography, and the fields reinforce each other. I find the whole field of computational imaging fascinating. Bandwidth, and a lack of control over the camera, are the systemic issues I have to contend with.  One main goal is to capture and deeply process 10-megapixel stills (BMPs) at 100 fps.  That's a gigabyte-plus each second... so it's not exactly kiddie stuff, and it requires a network of a dozen computers to carry out in real time... let's just say I no longer need central heating; air conditioning is now on my xmas list  :P

Edge detection, similar to focus peaking:
http://upload.wikimedia.org/wikipedia/en/8/8e/EdgeDetectionMathematica.png
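Focus peaking is essentially a thresholded gradient magnitude; a minimal sketch (my own illustration, using a Sobel filter rather than whatever kernel ML actually uses internally):

Code:
import numpy as np
from scipy.ndimage import sobel

def focus_peak_mask(gray, threshold=0.2):
    # Gradient magnitude via Sobel; mark pixels whose local contrast
    # exceeds a fraction of the maximum as 'in focus'.
    gx = sobel(gray.astype(float), axis=1)
    gy = sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()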


I'm a big fan of getting UI/UX right; it makes all the difference, especially in computer games, another area of interest. Have LAN, will play.