Method for Getting High-Quality Photos from Crappy Lenses

Started by blade, September 30, 2013, 04:28:52 PM



600duser

When I get time I'll probably write some for the 600D; it's not high on my list right now.

I'm not a photographer, but I design, test and build vision systems (androids), so there is some crossover. I have a room full of computers at my disposal, so processing power is not an issue. I'm experimenting with wide-angle lenses, fisheye lenses and mirrors at the moment. The field of view of most lenses is far less than what I need.

DSLRs have such a low frame rate for stills that they are unusable for the kind of projects I'm working on, where you need to capture 10 to 120 fps in low-light conditions, which is why I currently need a room full of computers.

Interesting article on lens correction
http://www.cambridgeincolour.com/tutorials/lens-corrections.htm
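
To give a feel for what the software side of that correction involves, here is a rough C sketch (my own illustration, not code from the article) of undoing radial barrel/pincushion distortion with the common one-term model; the coefficient k1 would normally come from a calibration profile measured for the specific lens:

/* Minimal radial distortion correction sketch (one-term Brown model).
 * For every pixel of the corrected output image, apply the lens's
 * forward distortion model to find where that point landed in the
 * distorted source frame, and copy the sample from there.
 * k1 is the lens's radial coefficient (negative for barrel,
 * positive for pincushion in this convention); it would normally
 * come from a per-lens calibration profile.
 */

typedef struct {
    int w, h;
    unsigned char *pix;   /* 8-bit grayscale, w*h bytes */
} Image;

/* dst must be allocated by the caller with the same size as src */
void undistort(const Image *src, Image *dst, double k1)
{
    double cx = src->w / 2.0, cy = src->h / 2.0;
    double norm = (cx < cy) ? cx : cy;   /* normalise radius so k1 is resolution independent */

    for (int y = 0; y < dst->h; y++) {
        for (int x = 0; x < dst->w; x++) {
            double xu = (x - cx) / norm;          /* ideal (undistorted) coords */
            double yu = (y - cy) / norm;
            double r2 = xu * xu + yu * yu;
            double s  = 1.0 + k1 * r2;            /* forward distortion factor */
            double xs = cx + xu * s * norm;       /* where it sits in the source */
            double ys = cy + yu * s * norm;

            int ix = (int)(xs + 0.5), iy = (int)(ys + 0.5);   /* nearest neighbour */
            dst->pix[y * dst->w + x] =
                (ix >= 0 && ix < src->w && iy >= 0 && iy < src->h)
                    ? src->pix[iy * src->w + ix] : 0;
        }
    }
}

Real correction profiles add higher-order radial terms, tangential terms and chromatic aberration, and use bilinear interpolation instead of nearest neighbour, but the remapping idea is the same.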

DSLRs are fine for photography, the intermittent taking of stills, but they have yet to come to terms with video or computing, so there are still a lot of barriers in their way. Webcams are plug-and-play, and smartphones have powerful general-purpose CPUs and a PC-like architecture.

I did a lot of homework before I bought the 600D. It's OK and I can live with it, but it's like a museum piece: 18th-century tech badly blended with 21st-century tech. Having lived through the early days of computing, it's kind of funny seeing digital cameras go through infant school. Photography in the future will be as easy as shaking your Tic Tacs!

Cameras of the future will be 50% battery by weight and will possess immense parallel computation power; at some stage they will exceed the power of a typical web surfer's desktop. They will be liquid-cooled, and the lens glass will probably be made out of cheap, lightweight plastic similar to the lens in the video in the OP. Something else will change: you won't have to try to keep the camera still. There will probably be a vibration device built into the camera to make it shake, i.e. the sensor oscillating in order to gather more light and information. Invisible nano-pulse lasers will assist with focus and rangefinding. Press the shutter and the ultra-light plastic lens will probably retract and recoil like a gun. You might even see compound-eye-type lenses further down the line.

One can imagine the inverse of this tech making its way into cameras.
http://www.youtube.com/watch?v=qOsibeDX8jM





g3gg0

Quote from: 600duser on October 14, 2013, 07:08:26 AM
did i mention the trouble with Hubble ? i think i did

Yep, you mentioned Hubble, but only the stacking techniques (median, averaging) and drizzle.
Both are unusable or at least problematic, often even for landscape photography.
IIRC the Hubble processing was mainly drizzle, but correct me if I am wrong.

This approach isn't new either; it's quite old. I read about such things a few years ago, and co-workers use it in their own code.
But it is very well optimized, and it can likely be used for our use case, if the calibration process isn't too complicated.
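
For reference, mean stacking is the simplest of those techniques; a minimal C sketch, assuming the frames are already registered, looks like this:

/* Mean-stacking sketch: average N already-aligned 8-bit grayscale
 * frames into one output frame. Averaging N frames cuts the random
 * noise by roughly a factor of sqrt(N); any misalignment between the
 * frames shows up as blur, which is exactly the registration problem
 * mentioned above.
 */
#include <stdio.h>

void stack_mean(unsigned char **frames, int nframes,
                int npixels, unsigned char *out)
{
    for (int i = 0; i < npixels; i++) {
        unsigned long sum = 0;
        for (int f = 0; f < nframes; f++)
            sum += frames[f][i];
        out[i] = (unsigned char)(sum / nframes);
    }
}

int main(void)
{
    enum { W = 4, H = 4, N = 3 };
    unsigned char f0[W * H], f1[W * H], f2[W * H], out[W * H];
    unsigned char *frames[N] = { f0, f1, f2 };

    for (int i = 0; i < W * H; i++) {     /* fake frames with different offsets */
        f0[i] = 100; f1[i] = 110; f2[i] = 120;
    }
    stack_mean(frames, N, W * H, out);
    printf("stacked value: %d\n", out[0]);   /* prints 110 */
    return 0;
}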


600duser

It's the calibration bit that is tedious, and probably why such techniques aren't used to their fullest by field photographers.

Here again, greater general computation power can automate many of the steps 'intelligently'. It will probably be a couple of years before these techniques start to shine through to the consumer showroom. The switch to f/1 lenses will be a big gain; it's crazy how much light complex DSLR lenses throw away. You can't correct film easily in the darkroom, so in the past the focus was on good glass.
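
To put a rough number on the f/1 point (my own back-of-envelope figure, not from the post above): sensor illuminance scales with 1/N^2, so an f/1.0 lens passes roughly eight times the light of an f/2.8 lens, about three stops.

/* Back-of-envelope: relative light gathering of two f-numbers.
 * Sensor illuminance scales as 1/N^2, so going from f/2.8 to f/1.0
 * gives (2.8/1.0)^2 ~ 7.8x the light, about 3 stops.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double fast = 1.0, slow = 2.8;
    double ratio = (slow / fast) * (slow / fast);
    double stops = log2(ratio);
    printf("f/%.1f gathers %.1fx the light of f/%.1f (%.1f stops)\n",
           fast, ratio, slow, stops);
    return 0;
}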



Working on a fun project right now: stuffing a crisp packet down a cardboard toilet-roll tube and using that as a lens. Just like astronomers, I work in abysmal light and field conditions; stress testing to the extreme is all in a day's work. I'll also be making some 100% plastic lenses for my Canon 600D, just for kicks. I need some lenses anyway but don't fancy shelling out the $ they ask. The challenge here is to make cheap lenses even cheaper than old, cheap second-hand lenses but just as good or even better picture-quality-wise.

Now that computation has become cheap enough and portable enough (thanks in part to LiPos), it is feasible to start building 'the ultimate' computerised camera.

Photographers really yearn for 'quality', but 'camera users' want utility. So I think we will see cheap, light plastic lenses plus computation take off big time. When it comes to video, your mind's eye does a lot of post-processing; this is why early TV didn't have to wait until Full HD to be successful. 320x240 pixels is pleasant enough for casual viewing, and 640x480 is quite watchable.

600duser

This is some pretty crazy stuff! Photography in 2050 AD is going to leave our eyebrows with purple fringing, lol.

More computational photography: object recognition allows you to apply HDR or tones to individual objects in the picture, and much more besides.
http://www.youtube.com/watch?v=o8ukJuezF0w

Phase-Based Video Motion Processing. Would make a great Easter egg for Magic Lantern? :o More groovy than 'the snowstorm'.
http://www.youtube.com/watch?v=W7ZQ-FG7Nvw

Light field
http://www.youtube.com/watch?v=q26mekrMoaY


g3gg0

I prefer realistic, non-science-fiction concepts.
I prefer C code, not videos or PowerPoints.

sorry :)

SpcCb

Quote from: 600duser on October 14, 2013, 09:37:01 AM
(...)
Working on a fun project right now. Stuffing a crisp packet down a cardboard toilet roll tube and using that as a lens. Just like astronomers i work in abysmal light and field conditions.
(...)
Sorry to say this (I hope it will not hurt you), but you are far from what astronomers do ;)

Astronomers work on science projects to understand and explain things, not for fun and marketing stuff. And if astronomers use highly complex optics and cameras, it is not because they like SF or Freud; it's because if you don't get good information from the source, nothing can 're-create' it after recording. As we say, good science starts with good observation.

However, I'm sure you can do very nice cosmetic corrections to pictures, improve UI ergonomics, and find ways to do better in future development. It's very interesting work and good for users ;)


Note about drizzling (a very good process, by the way, based on a biological phenomenon): it looks inapplicable here for DSLR photo/video because it needs multiple translated/rotated source frames to work well. And IMHO, all this post-processing is not really relevant to ML: it has to be done on a computer, after shooting, especially since ML users shoot RAW (does anyone still use JPEG here? :) ).

600duser

Drizzle
http://en.wikipedia.org/wiki/Drizzle_%28image_processing%29
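
A stripped-down sketch of the drizzle idea in C (my own simplification of the algorithm described there: translation only, no rotation or distortion handling):

/* Simplified drizzle sketch: each input pixel is shrunk by 'pixfrac',
 * placed on a 'scale'x finer output grid at its frame's sub-pixel
 * offset, and its value is spread over the output pixels it overlaps,
 * weighted by overlap area. Real drizzle also handles rotation and
 * geometric distortion; this only does translation.
 */

static double overlap(double a0, double a1, double b0, double b1)
{
    double lo = (a0 > b0) ? a0 : b0;
    double hi = (a1 < b1) ? a1 : b1;
    return (hi > lo) ? hi - lo : 0.0;
}

/* img:  one input frame, w*h doubles
 * dx,dy: that frame's sub-pixel shift in input-pixel units
 * acc,wgt: output accumulators, (w*scale)*(h*scale) doubles, start at 0
 * Call once per frame, then divide acc by wgt wherever wgt > 0.       */
void drizzle_frame(const double *img, int w, int h, double dx, double dy,
                   int scale, double pixfrac, double *acc, double *wgt)
{
    int ow = w * scale;
    int oh = h * scale;
    double half = 0.5 * pixfrac * scale;   /* drop half-size, output units */

    for (int iy = 0; iy < h; iy++) {
        for (int ix = 0; ix < w; ix++) {
            /* centre of the shrunken drop on the output grid */
            double cx = (ix + 0.5 + dx) * scale;
            double cy = (iy + 0.5 + dy) * scale;
            double x0 = cx - half, x1 = cx + half;
            double y0 = cy - half, y1 = cy + half;

            for (int oy = (int)y0; oy <= (int)y1 && oy < oh; oy++) {
                if (oy < 0) continue;
                for (int ox = (int)x0; ox <= (int)x1 && ox < ow; ox++) {
                    if (ox < 0) continue;
                    double a = overlap(x0, x1, ox, ox + 1) *
                               overlap(y0, y1, oy, oy + 1);
                    acc[oy * ow + ox] += a * img[iy * w + ix];
                    wgt[oy * ow + ox] += a;
                }
            }
        }
    }
}

After all frames have been added, each output pixel is acc/wgt wherever wgt is non-zero; the hard part in practice is measuring each frame's sub-pixel offset accurately, which is the calibration headache mentioned earlier.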

I mostly work with bitmaps for the vision project. I'm not sure why cameras don't export pictures as BMPs, as they are a no-fuss halfway house quality-wise (JPEG, BMP, RAW). JPEG is like MP3, losing valuable data. I tend to work with simple uncompressed formats like BMP and WAV.
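
As an illustration of how 'no fuss' BMP really is, a minimal 24-bit writer fits in a few dozen lines of C (a rough sketch, not production code):

/* Minimal 24-bit BMP writer sketch: a 54-byte header followed by raw
 * bottom-up BGR rows, each row padded to a multiple of 4 bytes.
 * No compression anywhere.
 */
#include <stdio.h>

static void put_u16(FILE *f, unsigned v) { fputc(v & 255, f); fputc((v >> 8) & 255, f); }
static void put_u32(FILE *f, unsigned v) { put_u16(f, v & 0xffff); put_u16(f, v >> 16); }

/* rgb: w*h*3 bytes, top-down, R,G,B order */
int write_bmp(const char *path, const unsigned char *rgb, int w, int h)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;

    int rowsize = (3 * w + 3) & ~3;          /* each row padded to 4 bytes */
    int datasize = rowsize * h;

    /* BITMAPFILEHEADER (14 bytes) */
    fputc('B', f); fputc('M', f);
    put_u32(f, 54 + datasize);               /* total file size */
    put_u32(f, 0);                           /* reserved */
    put_u32(f, 54);                          /* offset to pixel data */
    /* BITMAPINFOHEADER (40 bytes) */
    put_u32(f, 40); put_u32(f, w); put_u32(f, h);
    put_u16(f, 1);  put_u16(f, 24);          /* 1 plane, 24 bpp */
    put_u32(f, 0);  put_u32(f, datasize);    /* no compression */
    put_u32(f, 2835); put_u32(f, 2835);      /* ~72 dpi */
    put_u32(f, 0);  put_u32(f, 0);           /* no palette */

    for (int y = h - 1; y >= 0; y--) {       /* BMP rows run bottom-up */
        for (int x = 0; x < w; x++) {
            const unsigned char *p = rgb + 3 * (y * w + x);
            fputc(p[2], f); fputc(p[1], f); fputc(p[0], f);   /* B,G,R */
        }
        for (int pad = rowsize - 3 * w; pad > 0; pad--)
            fputc(0, f);
    }
    fclose(f);
    return 0;
}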

The techniques and software that I use lean more towards astronomy than stills photography, but it's a different kettle of fish. I build eyes capable of reverse-engineering reality (for machine minds, not human ones), whereas a camera, though eye-like, tries to recreate a photonic reality for human eyes.

There is a lot of crossover, of course, but the details differ. I have to very deeply process still images in less than 1/10th of a second: optical flow, feature extraction and recognition, motion detection, horizon detection, pillar tracking, multi-axis orientation, etc., as you have to compensate for and coordinate eye, head, body and walking motion. Trying to figure out what's moving: you, the environment, or both! Imagine skidding on a banana while skateboarding on a train at night in the middle of a thunderstorm; that about sums up the complexity my vision system has to cope with unaided. Crisp-packet toilet-roll lenses are what I do in my tea break.
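
As a flavour of the simplest item on that list, here is a frame-differencing motion-detection sketch in C (my own toy example; a real pipeline obviously compensates for the camera's own motion first):

/* Frame-differencing sketch: the crudest of the tasks listed above.
 * Flags pixels whose brightness changed by more than a threshold
 * between two consecutive grayscale frames. Handling ego-motion (the
 * skateboard-on-a-train case) needs the camera's own motion to be
 * estimated and compensated first, e.g. via optical flow, before
 * differencing.
 */

/* prev, curr: w*h 8-bit grayscale frames; mask: w*h output, 1 = motion */
int detect_motion(const unsigned char *prev, const unsigned char *curr,
                  unsigned char *mask, int w, int h, int threshold)
{
    int moving = 0;
    for (int i = 0; i < w * h; i++) {
        int diff = (int)curr[i] - (int)prev[i];
        if (diff < 0) diff = -diff;
        mask[i] = (unsigned char)(diff > threshold);
        moving += mask[i];
    }
    return moving;   /* number of changed pixels */
}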

Im just as likely to reach for a biology book as a physics book. I guess that marks the difference between an android eye and a camera. There is some amazing potential in computational photography and the fields reinforce each other. I find the whole field of computational imaging fascinating. Bandwidth and lack of control over the cameras systemic issues i have to contend with.  One main  goal is to capture & deeply process 10 megapixel stills (BMP's) @ 100fps.   That's a gigabyte+  each second.... so  its not exactly kiddie stuff & requires a network of a dozen computers to carry it out in realtime...lets just say i no longer need central heating, air conditioning is now on my xmas list  :P

Edge detection, similar to focus peaking:
http://upload.wikimedia.org/wikipedia/en/8/8e/EdgeDetectionMathematica.png
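
For completeness, the classic way to produce a map like that is a Sobel filter, which is essentially what focus peaking does; a small C sketch (my own illustration):

/* Sobel edge-detection sketch, the same idea focus peaking is built on:
 * estimate the horizontal and vertical brightness gradient at each
 * pixel and mark pixels where the gradient magnitude is large
 * (i.e. where detail is sharp/in focus).
 */
#include <math.h>

/* src: w*h 8-bit grayscale; edges: w*h output, 255 where |gradient| > thresh */
void sobel_edges(const unsigned char *src, unsigned char *edges,
                 int w, int h, double thresh)
{
    static const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    static const int ky[3][3] = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} };

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                edges[y * w + x] = 0;      /* skip the 1-pixel border */
                continue;
            }
            int gx = 0, gy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++) {
                    int v = src[(y + j) * w + (x + i)];
                    gx += kx[j + 1][i + 1] * v;
                    gy += ky[j + 1][i + 1] * v;
                }
            double mag = sqrt((double)gx * gx + (double)gy * gy);
            edges[y * w + x] = (mag > thresh) ? 255 : 0;
        }
    }
}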


Im a big fan of getting UI/UX right, it makes all the difference especially in computer games another area of interest. Have LAN will play.