Messages - Luther

#126
Chroma Smooth 3x3 (in the "raw correction" menu) works for my 50D when recording at ISO 400 or higher. I never use "chroma separation" (in the "details" menu), because it always destroys chroma detail.
Another thing you should be aware of: the more you push some effects (particularly Hue vs. Luminance), the more chroma artifacts you will get. If you're using 3D LUTs, they might also create artifacts because of color space limitations.
You can also actively remove chroma noise with the denoiser in MLVApp, as a last resort. Also, consider using a wide processing gamut such as "ACES AP1" or "Alexa Wide Gamut RGB" (it might reduce artifacts in highly saturated tones).
#127
Feature Requests / Re: What About Black Shading ?
December 18, 2019, 04:16:47 PM
Quote from: timbytheriver on December 18, 2019, 10:31:25 AM
Is 'Black shading' calibration (used by RED, Blackmagic URSA, et al.) the same as 'dark frame subtraction' which we have in MLVApp?
Yes, "black shading" is just a fancy name RED uses. A dark frame will also remove dead pixels. Some (more aggressive) FPN might not be completely removed by dark frame subtraction, though. There's another thread where a1ex and g3gg0 worked on FPN removal; try to find it...
Quote
I'm starting to believe that there is quite a lot of variance between people's camera sensors
Temperature is a big factor in a CMOS sensor's SNR.
#128
Nice tutorial. I'll be following the "cleaner iso presets" thread.
#129
Very nice.
What lens did you use in the scene at 1m36s (the woman in the red dress passing the guy in the white t-shirt)? It has very cool bokeh, almost anamorphic, but without the anamorphic distortion. I also like the green shadows; it gives that Fujifilm look from The Matrix.
Good job.
#130
Interesting, @chjawadm! It still interpolates the pixels, from what I understood of the paper. Their comparison is against bilinear and not the more recent AMaZE algorithm, so I don't know whether it performs better than the current one. I think the advantage is that it has low computational complexity and can be optimized for GPU processing.
#131
Quote from: Kharak on December 06, 2019, 10:28:36 AM
It gets the same compression ratio as HEVC
Not really. AV1 has a much better compression ratio than HEVC. This codec will become the new standard within a few years. Google is supporting it; there's even a YouTube playlist with some videos compressed with it.
Quote from: Kharak on December 06, 2019, 11:28:50 AM
Can I assume the reason for it being slow compared to h264/h265, being because most GFX cards today have HEVC encode/decoding chips ? Effectively making HEVC the standard?
It is still in development; the code hasn't been optimized yet. But, yes, one of the reasons is that CPUs/GPUs don't have dedicated hardware acceleration for it, and it also requires more processing 'steps' than H.264.
Just had this same discussion yesterday here: https://www.magiclantern.fm/forum/index.php?topic=24685.msg223139;topicseen#msg223139
#132
Quote from: Ilia3101 on December 04, 2019, 08:52:36 PM
- I don't want it to look like an iPhone app (MLV App's name containing 'App' is not enough to justify that)
I agree 100%.
Quote
- I like reflections and 3D-ness, I also like the icons in macOS
I like it too, but 3D is not very practical for a logo, because a logo must be simple enough to be readily recognized. Also, it should be a vector (.svg), in case you want to use it at other sizes (printed on a t-shirt, stickers, etc).
Quote
My original logo is not amazing.
I actually like it. Maybe you guys could keep it simple and stay with the original logo, but redraw it in Inkscape and try to simplify the design...
#133
Share Your Videos / Re: Thanksgiving Holiday
December 05, 2019, 12:46:47 AM
Nice. Are you using some LUT?
Quote
I rendered to HEVC(H.265) instead of AVC(H.264)
The issue with this is that it's slow. I'm using H.264 with these settings:

ffmpeg -i input.mov -c:v libx264 -preset slow -crf 18 -tune film -coder cabac -pix_fmt yuv420p -c:a aac -b:a 320k -metadata title="" -map_metadata -1 -map_chapters -1 -disposition 0 -hide_banner -movflags +faststart output.mp4


This is relatively fast compared with HEVC and gives high-quality output. If you're really worried about YouTube losses, and you have a good internet connection, maybe try uploading a ProRes 422 LT file directly. Either way, the bitrate for 1080p doesn't go higher than ~5 Mb/s (yours is 3565 kb/s, according to youtube-dl). If you use Premiere, see this plugin: Voukoder.
Can't wait for AV1 to arrive. I tested the Rust encoder (rav1e) the other day, but it's still too slow, unfortunately. The compression ratio is impressive, though.
#134
Hey, I saw the GitHub issue about a new logo. How about a homage to The Horse in Motion? There are some nice icons here, as inspiration.
I don't much like the idea of using magick references... they've already been overused by many (ImageMagick, G'MIC, etc), IMO.
#135
Very nice. It seems to me there's a slight tint imbalance (green/magenta) in the shadows and midtones, but it's indeed very clean. The scene from 3min15s to 3min30s (when the sun hides behind the clouds) shows impressively how well the camera still holds the DR, even though the light just dropped by about 4 stops.
Nice job, and thanks also to @timbytheriver for refining the ADTG research.

ps: at ~4min30s you talk about plant communication. AFAIK, this normally happens through fungi (see Mycorrhiza).
pps: nice overcoat ;)
#137
Quote from: Danne on November 23, 2019, 02:59:12 AM
Output is dng deflate.
So what? That's a pro, not a con. Preserving it in raw format gives you much more flexibility.
Quote
Hdr movie sequences will not work here for instance.
Neither will ffmpeg's tmix.
Quote
Numerous occasions where hdrmerge will output badly merged files. Occasionally big issues in lows(crushing blacks), before highlights were messy too but seems fixed.
I've used it for years, and that has never happened once.
Quote
Bulk work not reliable.
On this point I agree.
Quote
Clear winner as always, enfuse. By far.
I'll test it.
Quote
I think your last words in your post pretty much sums it up pretty perfectly ;).
Maybe. But criticism is necessary sometimes.
#138
Question: why not just use HDRMerge? I don't know if the averaging results are the same, but HDRMerge already gives great results... and not only does it do averaging, it also gives you an extreme dynamic range output.
I like the idea of having more open source photo editing tools with new features, and you folks know that I and many others appreciate your work, but sometimes it feels a bit like "reinventing the wheel" or "NIH syndrome".
Big words from someone who doesn't contribute code, I know. I mean no offense here. I should just shut up.
#139
Really like the colors. Well done.
#140
Nice progress here, @reddeercity! I still need to try the latest build, but the first one works well.
#141
Me too. I don't like 3-way wheels. Everything that can be done with wheels can also be done with curves.
IMO, other things should take priority, such as enhancing the denoiser or adding deconvolution...
#142
Quote from: Ilia3101 on November 19, 2019, 03:55:18 PM
This way I assume there is no javascript being run, right?
Right. The issue is that most people just use their iframe (which requires JavaScript) instead of the direct binary link.
Quote
Is yours edited in RawTherapee? The denoising look is not something I've seen before from MLV App.
In MLVApp. Indeed, the noise looks strange there.

edit:
Quote
it is not done in the LMS colour gamut with white balance, but right after, when the image is in the final gamut.
Possible change for 2.0 release?
#143
Quote from: 2blackbar on November 20, 2019, 03:24:58 AM
What will happen if I'll use one photo cloned multiple times and average? Will it do anything to it?
It won't have any effect: averaging only reduces noise that varies between frames, and identical clones carry identical noise. See:
https://en.wikipedia.org/wiki/Signal_averaging

I've been using HDRMerge for some time in photography; it does the same noise averaging while also doing the HDR merge. Highly recommended to anyone who likes landscape photography.
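To see why cloning a single photo does nothing while averaging separate exposures helps, here's a small illustrative sketch (my own toy numbers, not HDRMerge code): averaging N frames with independent noise shrinks the noise standard deviation by about sqrt(N), while averaging N clones of one frame leaves it untouched.

```python
import random
import statistics

random.seed(42)
SIGNAL = 100.0       # "true" pixel value
NOISE_STDDEV = 10.0  # per-frame random noise
N_FRAMES = 16
N_PIXELS = 10_000

def noisy_frame():
    """Simulate one exposure: signal plus independent Gaussian noise."""
    return [SIGNAL + random.gauss(0, NOISE_STDDEV) for _ in range(N_PIXELS)]

# Case 1: average 16 independently captured frames -> noise drops ~4x
frames = [noisy_frame() for _ in range(N_FRAMES)]
averaged = [sum(px) / N_FRAMES for px in zip(*frames)]
print(statistics.stdev(averaged))    # about 10 / sqrt(16) = 2.5

# Case 2: "average" 16 clones of the same frame -> noise unchanged
clone = noisy_frame()
cloned_avg = [sum(px) / N_FRAMES for px in zip(*[clone] * N_FRAMES)]
print(statistics.stdev(cloned_avg))  # still about 10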
#144
Feature Requests / Re: lens distortion correction MLV
November 20, 2019, 06:01:37 AM
Do you mean metadata on MLV files? See this thread:
https://www.magiclantern.fm/forum/index.php?topic=24470.0

For MLVApp support, see drawbacks explained by @masc:
https://www.magiclantern.fm/forum/index.php?topic=20025.msg216650#msg216650
#145
That 25mm might be really nice for landscapes.
#146
Academic Corner / Re: Underwater photography
November 20, 2019, 05:54:30 AM
Quote from: mothaibaphoto on November 19, 2019, 09:30:10 AM
what I call "whitebalancing", as that is essentially what it is at its core.
It's not. That's the part you guys don't get, even though I've already tried to explain it as simply as I could.
In white balance, the correction is a constant: a single slider between blue and orange, applied to the whole image. But color varies with how much light it receives and how saturated it is. Their method compensates for those variations; white balance doesn't. I really don't know how to explain it more clearly. Read:
https://en.wikipedia.org/wiki/CIECAM02#Appearance_correlates
Quote
So, what exactly are they invented?
They glued together other research to solve a specific problem (underwater digital image color correction) in an automated way.
Other algorithms, such as Retinex, can do that, but you can't be sure the result is accurate without capturing 3D maps and having spectral data.
Quote
They enhance some existent.
That's not exactly accurate. But even if it were, what's the problem with that? Technology evolves by improving on earlier work.
Quote
If I, let say, 10 meters underwater, and all my scene within 1 meter deep - i can totally ignore that difference.
What are you even saying?
Quote
Do they really in such need that "color-accurate" images at that expence?
Some studies do require it. For example (I'm not a marine biologist, correct me if I'm wrong), you could analyse how a coral's color changes over time, depending on environmental changes (normally temperature). You can do that today, but not accurately.
Quote
They need to show something to justify they are not just having fun diving Lembeh strait on someone else's expence.
This is not the place for your political opinions. Twitter maybe.
#147
Academic Corner / Re: Underwater photography
November 18, 2019, 06:41:36 PM
I agree, a1ex. Thanks for the links; the Hacker News discussion gets quite technical... I like that.
Quote
when (or if) the ISO tweaks will become "mainstream"
I hope this happens.
#148
Post-processing Workflow / Re: Free Kodak Porta 400 Lut
November 16, 2019, 05:37:29 PM
How was it obtained?
#149
Academic Corner / Re: Underwater photography
November 16, 2019, 04:54:01 PM
Certainly will make a difference.
@mothaibaphoto, what other software does is basically:
- Debayer the image
- Interpret it in some color space, using a simple matrix
- Apply white balance to it (will globally remove the blue cast of the water)
- Apply some contrast to the image (will globally shift saturation and luminance of color tones)
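That conventional pipeline amounts to something like this toy sketch (my own illustration with made-up gain values, not any actual software's code): one gain per channel, applied identically to every pixel, so it can't adapt to how far each object is from the camera.

```python
# Hypothetical global white balance: a single per-channel gain multiplies
# every pixel the same way, regardless of the subject's distance underwater.
def white_balance(image, gains=(1.8, 1.0, 0.7)):
    """image: list of (r, g, b) pixels in 0..1; gains: per-channel multipliers."""
    return [tuple(min(1.0, c * g) for c, g in zip(px, gains))
            for px in image]

# A blue-cast image: boosting red and cutting blue shifts ALL pixels equally.
blue_cast = [(0.2, 0.4, 0.8), (0.1, 0.3, 0.7)]
print(white_balance(blue_cast))
```

Every pixel gets the same shift, which is exactly why this can only remove the cast "on average" rather than per object.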

Now, from what I understood about the research, this is what is done:
- Water scatters light from the sun and creates the blue cast/haze.
- If you know some parameters, such as the water's density and other properties, it's possible to calculate the amount of scattering (see "refractive index" and Rayleigh scattering)
- The software calculates the distance between the object you're photographing and the camera sensor. Based on that, it can also calculate the amount of scattering
- But camera sensors have different response curves than human eyes and, worse, they differ from each other (sometimes there's even a small difference between units of the same camera model). So how could you calculate that? That's where spectral data comes in. This spectral data "explains" to the software how colors respond on the sensor and how to represent that computationally.
- Now that you have the calculated amount of light scattering and the sensor's spectral response, you just need to tell the software to subtract the scattering from what the sensor recorded. So, instead of applying the changes globally, it adapts color tones depending on distance. This way you recover close to 100% of the actual color of the object you're photographing.

For example, if you're photographing a pufferfish 1 meter from the sensor, with a coral 3 meters behind it, those colors will receive different corrections, because they are at different distances from the sensor and therefore have different amounts of haze/scattering.
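The distance-dependent idea can be sketched as a toy model (my own made-up attenuation coefficients; the real method estimates these from the water's optical properties and the sensor's spectral data): each channel attenuates with distance and gains backscatter, and inverting that per pixel, using that pixel's own distance, recovers the original color at any depth of field.

```python
import math

# Toy underwater image formation, per channel:
#   observed = true * exp(-beta * d) + B * (1 - exp(-beta * d))
# d = object-to-camera distance, beta = attenuation, B = backscatter haze.
BETA = (0.40, 0.10, 0.05)         # hypothetical: red attenuates fastest underwater
BACKSCATTER = (0.05, 0.20, 0.45)  # hypothetical: bluish veiling light

def attenuate(true_color, distance):
    """Simulate what the camera sees for an object at a given distance."""
    return tuple(j * math.exp(-b * distance) + bs * (1 - math.exp(-b * distance))
                 for j, b, bs in zip(true_color, BETA, BACKSCATTER))

def recover(observed, distance):
    """Invert the model per pixel, using that pixel's own distance."""
    return tuple((o - bs * (1 - math.exp(-b * distance))) / math.exp(-b * distance)
                 for o, b, bs in zip(observed, BETA, BACKSCATTER))

coral = (0.8, 0.3, 0.2)
for d in (1.0, 4.0):  # e.g. pufferfish at 1 m, coral at 4 m
    seen = attenuate(coral, d)
    print(d, [round(c, 3) for c in recover(seen, d)])  # recovers (0.8, 0.3, 0.2)
```

A global white balance can only pick one correction for the whole frame; this per-distance inversion is what lets both the 1 m and 4 m objects come back to their true colors.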

This is a very scientific way of correcting underwater images, hence why @Ilia3101 called it "genius". And now I'm also calling it that: genius. This is great work, and these people probably spent a lot of time and effort on it. Congrats to them; I hope they get academic and industry recognition.
#150
Quote from: 70MM13 on November 13, 2019, 11:53:20 PM
here's part 2, filmed the next day with lots of full sun on the snow, creating nice sparkling highlights :)
Damn, that's beautiful. I wouldn't be able to distinguish these images from ones recorded with RED cameras.