Underwater photography

Started by garry23, November 13, 2019, 11:54:54 PM

garry23

I found this interesting and I thought others might as well.

This Algorithm Can Remove the Water from Underwater Photos, and the Results are Incredible
http://flip.it/QlkUdt

Luther


garry23

Sorry, I missed the original posting.


ilia3101

Quote from: c_joerg on November 14, 2019, 12:17:14 PM
Just the "Auto Color Corrections" in CS6 fits mostly the Problem

Good thing these scientists weren't thinking like that.

Dmytro_ua

What housing do you use?

5d3 1.2.3 | Canon 16-35 4.0L | Canon 50 1.4 | Canon 100mm 2.8 macro
Ronin-S | Feelworld F6 PLUS

mothaibaphoto

Quote from: Ilia3101 on November 14, 2019, 12:58:17 PM
Good thing these scientists weren't thinking like that.
Haven't seen anything special in that YouTube vid.
That "scientist" seems to have just reinvented auto white balance.
I've shot a lot of photo/video underwater professionally, so I know what I'm talking about.

c_joerg

Quote from: Dmytro_ua on November 14, 2019, 04:17:16 PM
What housing do you use?
I only use my S110 with a Meikon housing at the moment.
The great depth of field offers some advantages underwater...

Quote from: mothaibaphoto on November 15, 2019, 08:12:14 AM
That "scientist" seems just reinvent auto white balance.
I've shot a lot of photo/video underwater professionally, so i'm know, what i'm talking about.
I've played a lot with white balance and never got to the results of "Auto Color Correction" in CS6.
It looks like it also works with videos.
EOS R


ilia3101

Saw that. Genius, and inspiring.

mothaibaphoto

Quote from: dfort on November 15, 2019, 04:09:24 PM
Guys, there is more to this than reinventing auto white balance or applying auto color corrections:

http://openaccess.thecvf.com/content_CVPR_2019/papers/Akkaynak_Sea-Thru_A_Method_for_Removing_Water_From_Underwater_Images_CVPR_2019_paper.pdf
As far as I understand, they reconstruct a 3D scene from the image and apply color corrections dependent on distance?
And "In all cases, the simple contrast stretch S1, which is global, works well when scene distances are more or less uniform".
So yes, this is something more, but it can't much inspire someone more or less familiar with the subject.
By the way, the ability to use extreme white balance is crucial for getting rich colors underwater (and Canons rock here, as no other cameras allow such extreme values):
https://www.dpreview.com/articles/5150953093/canon-1d-x-mark-ii-under-water-review

ilia3101

Quote from: mothaibaphoto on November 16, 2019, 02:37:01 AM
As far as I understand, they reconstruct a 3D scene from the image and apply color corrections dependent on distance?
And "In all cases, the simple contrast stretch S1, which is global, works well when scene distances are more or less uniform".
So yes, this is something more, but it can't much inspire someone more or less familiar with the subject.

IDK about the YouTube vid, but if you look at the paper, they use the depth map to undo the spectral absorption of the water, also taking the camera's spectral response into account. It reverses the depth-dependent colour differences within the scene quite accurately.

It's very smart colour correction. Will it look better? Maybe... maybe it won't even make a difference.

Luther

It certainly will make a difference.
@mothaibaphoto what other software does is basically:
- Debayer the image
- Interpret it in some color space, using a simple matrix
- Apply white balance to it (which globally removes the blue cast of the water)
- Apply some contrast to the image (which globally shifts the saturation and luminance of color tones)

Now, from what I understood of the research, this is what is done:
- Water scatters light from the sun and creates the blue cast/haze.
- If you know some parameters, such as the water's density and other properties, it's possible to calculate the amount of scattering (see "refractive index" and Rayleigh scattering).
- The software calculates the distance between the object you're photographing and the camera sensor. Based on that, it can also calculate the amount of scattering.
- But camera sensors have different response curves than human eyes and, worse, they differ from each other (sometimes there's even a very small difference between units of the same camera model). So how could you calculate that? That's why we have spectral data. This spectral data "explains" to the software how colors respond on the sensor and how to represent that computationally.
- Now that you have the calculated "amount of light scattering" and the "sensor spectral response", you just tell the software to subtract the first from the latter. So, instead of applying the changes globally, it adapts the color tones depending on distance. This way you get nearly 100% of the actual color of the object you're photographing.

For example, if you're taking a photo of a pufferfish 1 meter from the sensor, with a coral 3 meters behind it, all these colors will be adapted differently, because the objects are at different distances from the sensor and therefore have different amounts of haze/scattering.
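To make the contrast concrete, here is a minimal numpy sketch of both approaches. This is not the authors' code: the image, depth map, and attenuation coefficients are made-up illustration values, and the real Sea-thru additionally estimates and subtracts backscatter before undoing attenuation.

```python
import numpy as np

# Toy linear-RGB frame and a per-pixel distance map in meters.
# In the paper the distances come from structure-from-motion;
# here both are synthetic, purely for illustration.
img = np.random.rand(4, 6, 3).astype(np.float32)
depth = np.random.uniform(1.0, 4.0, size=(4, 6)).astype(np.float32)

# Hypothetical per-channel attenuation coefficients of water (1/m):
# red is absorbed fastest, blue slowest. Values are made up.
beta = np.array([0.6, 0.15, 0.05], dtype=np.float32)

# Global white balance: one gain per channel for the whole frame
# (gray-world style). Distance is ignored entirely.
means = img.mean(axis=(0, 1))
white_balanced = img * (means.mean() / means)

# Depth-dependent correction: undo Beer-Lambert attenuation
# exp(-beta * d) per pixel, so a fish at 1 m and a coral at 4 m
# get different amounts of red restored.
attenuation = np.exp(-beta[None, None, :] * depth[..., None])
restored = np.clip(img / attenuation, 0.0, 1.0)
```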

This is a very scientific way of correcting underwater images. That's why @Ilia3101 called it "genius". And now I'm also calling it genius. This is great work, and these people probably spent a lot of time and effort on it. Congrats to them; I hope they get academic and industry recognition.

a1ex

Quote
This is a very scientific way of correcting underwater images. That's why @Ilia3101 called it "genius". And now I'm also calling it genius. This is great work, and these people probably spent a lot of time and effort on it. Congrats to them; I hope they get academic and industry recognition.

Indeed. Here are some related discussions:
https://news.ycombinator.com/item?id=21542184
https://old.reddit.com/r/BeAmazed/comments/dxyld1/two_scientists_created_the_seathru_algorithm_that/

A quick search for an open source implementation reveals this (I didn't try it):
https://github.com/dartmouthrobotics/underwater_color_enhance

and a second (older) paper:
http://openaccess.thecvf.com/content_cvpr_2018/papers/Akkaynak_A_Revised_Underwater_CVPR_2018_paper.pdf

The dismissal of this work as "reinventing auto white balance", coming from apparently knowledgeable users in our community, was quite disappointing in my eyes. You'll probably say the same when (or if) the ISO tweaks become "mainstream" - considering that the amount of time that went into that research was probably similar, but the impact - at least in my eyes - was much lower. OK, my reasoning was quite selfish :)

Luther

I agree, a1ex. Thanks for the links; the Hacker News discussion gets quite technical... I like that.
Quote
when (or if) the ISO tweaks become "mainstream"
I hope this happens.

histor

The more it is advertised, the further from science it becomes.
"Once trained, the color chart is no longer necessary," they say. The water in my bath has a different color every morning. ;) So does the ocean, I believe.
It's good research, but it's going to become a button-push process for GoPro or something alike. That's a pity.

And on the other side, there are also brilliant "unscientific" old ways to solve the problem, like trashing the blue channel (and reconstructing it from the rest) and guessing the depth map (Kolor and Adobe used that for their Dehaze feature).
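For reference, the "guessing the depth map" trick is in the spirit of the dark channel prior used for single-image dehazing. Here is a rough sketch under that assumption - not the actual Kolor/Adobe code, and the constants are just the usual textbook choices:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def rough_depth_from_haze(img, patch=15, beta=1.0):
    """Guess a relative depth map from a hazy RGB image in [0, 1]."""
    # Dark channel: darkest color value within a local patch; it is
    # near zero in haze-free regions and rises with haze thickness.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Estimate the veiling light from the brightest dark-channel pixels.
    idx = np.argsort(dark.ravel())[-10:]
    veil = img.reshape(-1, 3)[idx].mean(axis=0).max()
    # Transmission estimate, then relative depth via t = exp(-beta * d).
    t = np.clip(1.0 - 0.95 * dark / veil, 1e-3, 1.0)
    return -np.log(t) / beta
```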

mothaibaphoto

C'mon, guys. Sorry for disappointing you, but how can you even compare your work to this?
You do pure research, because your work is completely voluntary and you have no obligations of any kind.
Those scientists work for salaries/grants and maybe even want to commercialise the result.
They need to show something to justify that they are not just having fun diving the Lembeh Strait at someone else's expense.
And they revealed "Sea-thru": whoa, removing water from images!
After reading those comments on Reddit I want to say just one thing - I really like people:
they will easily buy anything with good branding and a couple of clever tricks - "Sea-thru", "Theranos", etc.
Back on subject. According to the original papers, "Sea-thru" is:
1. A color correction algorithm, which I call "white balancing", as that is essentially what sits at its core.
2. A 3D map of the scene.
3. A method of calculating the amount of color correction depending on the distance to any object, according to the 3D map.
So, what exactly have they invented? The color correction algorithm itself? No, they don't claim that! They enhanced an existing one.
A new method of generating the 3D map from several images? No, they used commercial software from Agisoft LLC.
What's left? The idea of taking the 3D map into account itself? Is that so important in any given situation?
Let's see how it works. First, light travels from the surface down to the scene - say, H.
Then it reflects off an object and travels to the camera - D1. The light travels H+D1. Similarly, another object gives us D2.
For this whole idea to matter, D1-D2 should be comparable to H.
If I am, say, 10 meters underwater and my whole scene spans 1 meter in depth, I can totally ignore that difference.
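A quick Beer-Lambert sanity check, with a made-up red attenuation coefficient (the numbers are illustrative only):

```python
import numpy as np

beta_red = 0.6   # illustrative red attenuation coefficient (1/m)
H = 10.0         # surface-to-scene depth (m)
for d1, d2 in [(1.0, 2.0), (1.0, 8.0)]:
    t1 = np.exp(-beta_red * (H + d1))  # red light surviving path H+D1
    t2 = np.exp(-beta_red * (H + d2))  # red light surviving path H+D2
    print(f"D1={d1} m, D2={d2} m -> red ratio {t1 / t2:.1f}")
# D1=1, D2=2: ratio ~1.8, so one global gain is nearly right.
# D1=1, D2=8: ratio ~67, and no single gain fits both objects.
```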
So, "Sea-thru" yilds very little difference(just compare S1 results - simple one button push in Photoshop - to S5) with very high price(need to generate 3D model) under very limited circumstances(the distance between different objects and camera should be significant, comparable to distance to the surface).
Who can utilise that? Photographers? I doubt that - too complicated with too weak advantages. Marine biologists? Do they really in such need that "color-accurate" images at that expence? Actually, they have thousands of methods to get answers to theirs questions or results, if they can't take something to the lab.  This, for example :)
Or this, great example of real dirty job of marine biologists:)

Luther

Quote from: mothaibaphoto on November 19, 2019, 09:30:10 AM
what i call "whitebalancing" as it essentually it is in the core.
It's not. That's where you guys don't get it, even though I already tried to explain as simple as I could.
Color is not a constant as it is in white balance. It's not a slide to change between blue/orange. Color will vary based on how much light and saturation it has. Their method compensates those variations, unlike white balance. I really don't know how I could be more clear in explaining it. Read:
https://en.wikipedia.org/wiki/CIECAM02#Appearance_correlates
Quote
So, what exactly are they invented?
They glued together other research to solve a specific problem (underwater digital image color correction) in an automated way.
Other algorithms can do that, such as Retinex (sketched below), but you can't be sure the result is accurate without capturing 3D maps and having spectral data.
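For reference, here is a minimal single-scale Retinex; it flattens the illumination without any depth map or spectral data, and sigma and the clipping bounds are arbitrary choices, not anyone's published settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80.0):
    """Single-scale Retinex on an RGB image in (0, 1]: subtract the log
    of a heavily blurred copy, i.e. divide out slow-varying illumination."""
    img = np.clip(img, 1e-4, 1.0)
    # Blur spatially, per channel (sigma 0 along the channel axis).
    blur = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return np.log(img) - np.log(np.clip(blur, 1e-4, 1.0))
```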
Quote
They enhanced an existing one.
That's not exactly accurate. But even if it were, what would be the problem? Technology evolves by improving on what came before.
Quote
If I am, say, 10 meters underwater and my whole scene spans 1 meter in depth, I can totally ignore that difference.
What are you even saying?
Quote
Do they really need such "color-accurate" images at that expense?
Some studies do require it. For example (I'm not a marine biologist; correct me if I'm wrong), you could analyse how a coral's color changes over time in response to environmental changes (normally temperature). You can do that today, but not accurately.
Quote
They need to show something to justify that they are not just having fun diving the Lembeh Strait at someone else's expense.
This is not the place for your political opinions. Twitter maybe.

mothaibaphoto

Quote from: Luther on November 20, 2019, 05:54:30 AM
What are you even saying?
I'm saying "'Sea-thru' yields very little difference (just compare the S1 results - a simple one-button push in Photoshop - to S5) at a very high price (you need to generate a 3D model) under very limited circumstances (the distances between the objects and the camera must vary significantly, comparably to the distance to the surface)."
My conclusion is based on the results demonstrated in the original papers and on my previous experience with the subject.
Can you be more specific in contradicting it?
Show some examples, not formulas.
Moreover, the article you reference is about "... to model the human perception of color. The CIECAM02 model has been shown to be a more plausible model of neural activity in the primary visual cortex, compared to the earlier CIELAB model".
What does a "more plausible model of neural activity in the primary visual cortex" have in common with the task of getting correct colors in underwater images? Are you sure you understand what this is all about?
I'm not a marine biologist either, but your hypothetical example is very doubtful: when coral gets unhealthy, it "bleaches", which is visible to the naked eye. And I don't see crowds of underwater professionals of any kind excited by the new possibilities.