Show posts

Messages - Luther

#101
Quote from: Matikr on February 19, 2020, 10:18:00 AM
1) what's the best practice (setting wise) to be able to zoom at night on the stars?
On the "Prefs" menu, you can turn on "Increase SharpContrast" inside "LiveView zoom tweaks". This will make it easier to see whether your image is in focus. Also, on the "Overlay" menu, there's an option called "Magic Zoom" which you can play with.
Quote
2) Any suggestions (ML setting wise) for the actual shoot?
You can automate nearly everything using ML. On the "Shoot" menu, see the Intervalometer, bulb timer, advanced bracket, etc.
Quote
3) From your experience with the EOS-M, any recommendations of dos and don'ts for such shots (like max ISO etc.)? Nights will be cold, if it makes any difference.
I don't own an EOS-M, but usable Canon ISOs generally only go up to 1600. I wouldn't go beyond that.
Quote
4) intervalometer: how do I stop it once it has started (without taking out the battery)? Is there a way to set it so that there's absolutely no gap between pictures taken (not even 1 sec)?
About stopping, dunno. But you can remove the gap: on the Canon menu, set the drive mode ("Drive mode") to "continuous shooting". Then, on the ML menu, set the Intervalometer to "Start after: 0s".
Quote
5) I enabled "Global Draw" and really like the fact that I can see the focusing distance, but is there a way to control the camera settings, or the x10 zoom, while at that window? At the moment I need to switch to one of the regular displays by hitting the INFO button.
See the LiveView zoom tweaks menu.
Quote
6) I was reading some on Dual ISO and ETTR, but it seems like it is not something I should use for long exposures, am I correct?
Dual ISO won't help, but ETTR is important for any photo. The more light you capture, the better the SNR (less noise, more information). Also, in your case, you could take multiple exposures and combine them (HDR). This would reduce noise and keep everything within the dynamic range. I always use and recommend HDRMerge.
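To illustrate the SNR part: stacking/averaging N aligned frames reduces random (shot) noise by roughly sqrt(N). A minimal sketch in plain C; the function name is made up, and real mergers like HDRMerge also align frames and weight pixels by exposure:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Average N aligned exposures pixel by pixel. For photon (shot) noise,
 * averaging N frames improves SNR by a factor of about sqrt(N). */
static void average_frames(const float *frames, size_t n_frames,
                           size_t n_pixels, float *out)
{
    for (size_t p = 0; p < n_pixels; p++) {
        double sum = 0.0;
        for (size_t f = 0; f < n_frames; f++)
            sum += frames[f * n_pixels + p];
        out[p] = (float)(sum / n_frames);
    }
}
```

With frames of different exposure values, you would additionally normalize each frame to a common scale and blend with weights before averaging; that's the "HDR" part.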
#102
Damn, if it weren't for the aliasing in some trees and the motion blur difference, I could easily mistake this for some Kodak film from the '80s (particularly the 250D 5297). Very good job on the color grading, @ZEEK. Can you explain your post-processing steps a bit? Any LUTs? What impressed me the most was the highlight rolloff; very film-like.
#103
@masc @Ilia3101 @bouncyball
Did you guys see this new feature in RawTherapee? It's doing 'magic' with some of my photos:
https://github.com/Beep6581/RawTherapee/issues/5412

Might be too difficult to adapt the code to MLVApp, but something like that for video would be awesome. Maybe some day.
#104
Hey @Ilia3101, nice progress on the last commits (raw WB, blue desaturation, changing Reinhard to float). Couldn't test yet (might be able to next week), but I'm trying to keep track of those changes :)
#105
Quote from: Andy600 on February 01, 2020, 01:32:54 PM
You don't need an IDT for raw files in Resolve's ACES environment.
I don't think this is accurate. The IDT contains spectral data, while the DNG contains only a simple matrix. While it's true that you can use the ACES color space on any image, ACES is more than just a color space; it's a color management system. We have previously discussed this here.
Quote
Best practice is to target whatever colorspace the intended playback device displays and only ever grade for the device colorspace(s) you can physically view i.e. your monitor(s).
True, but you can easily convert Rec.2020 to any other display space without losing information. The same cannot be said for Rec.709 or P3.
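For linear RGB, the Rec.2020 to Rec.709 step is just a 3x3 matrix. A sketch in C (the matrix values are the commonly published D65 approximation; the function name is mine). The information loss in the other direction shows up as negative or >1 components that then need gamut mapping:

```c
#include <assert.h>
#include <math.h>

/* Linear BT.2020 RGB -> linear BT.709 RGB (D65), approximate 3x3 matrix.
 * Saturated Rec.2020 colors land outside Rec.709 and come out with
 * negative (or >1) components, which is where gamut mapping is needed. */
static void rec2020_to_rec709(const float in[3], float out[3])
{
    static const float M[3][3] = {
        {  1.6605f, -0.5876f, -0.0728f },
        { -0.1246f,  1.1329f, -0.0083f },
        { -0.0182f, -0.1006f,  1.1187f },
    };
    for (int r = 0; r < 3; r++)
        out[r] = M[r][0] * in[0] + M[r][1] * in[1] + M[r][2] * in[2];
}
```

Note this applies to linear data only; with encoded (gamma/log) signals you have to linearize first.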
Quote
Lastly: do you really need to use ACES? I like ACES a lot and I can see the appeal for multi-cam, multi-format shows, CGI and collaborative, cross-platform workflows, but IMO it's overkill for most things, especially if you don't fully understand how it works and what it's for.
If you're already using Alexa WideGamut, I agree. But going from Rec.709 processing to ACES is a huge jump in color quality.
#106
Nice tutorial, thanks. Some notes:
Quote
Cinelog-C
There's no need to use that; ACES already has a log encoding built in. There's a comparison here between Log-C and ACES.
Quote
ACEScg
ACEScg is meant for computer graphics work (particularly green/blue screen compositing). Normally you should use ACEScct instead.
Quote
Output Space: Input - Canon - Linear - Canon DCI-P3 Daylight
Quote
Output Space: Input - Generic - sRGB - Texture
It's not good practice to convert between color spaces multiple times; ACES was actually created to avoid exactly that.
Also, the "best practice" now is to use Rec.2020 instead of Rec.709/P3.
#107
General Chat / Re: Looking to the future
January 24, 2020, 12:47:22 PM
Dude, read the article linked in the OP. It already talks about Fuji's "auto" functions. The hypothesis in the article is whether cameras will gain more functions than that and go "fully auto". My answer is: phone cameras already do that, but it will not happen in professional photography. Just that, simple.
Quote
"Auto" does not equal "reads the user's mind."
Auto means automated. If you're telling the camera what to do, it's not automated and, by definition, it's not "fully auto".

End of story here, let's move on.
#108
General Chat / Re: Looking to the future
January 23, 2020, 02:41:14 PM
Quote from: meanwhile on January 23, 2020, 02:19:01 PM
It's not hard: you simply tell the camera.
Yes, but @garry23 said "fully auto". Telling the camera what to do is not "fully auto".
Quote
https://en.wikipedia.org/wiki/Light-field_camera
You will have to choose the focus in post-production anyway, so not fully auto.
#109
General Chat / Re: Looking to the future
January 21, 2020, 10:48:12 AM
Quote from: garry23 on January 19, 2020, 08:02:40 AM
cameras could be fully auto, as they gather the data to allow you to 'fix' the exposure and focus in post.
I don't think this will be the case. At least not for professional photographers, because settings are changed not just to get good exposure, but also to create the artistic effect you want. How would the camera know you want shallow DOF even in daylight? It won't. How would it know you want a long exposure instead of a 1/1000 shutter speed? It won't.
What it could do, though, is get the proper exposure and focus for a given scene; basically what smartphones already do:
- Set the shutter speed to the reciprocal of the focal length of the lens you're using (50mm > 1/50)
- Keep the ISO as low as possible
- Open the lens wide, read the scene and focus on the main subject
- Set the aperture for the best contrast (according to the MTF data provided)
- Decrease the shutter speed until the highlights are -1% exposed (ETTR)
- Auto WB
- If it still needs more light, open up the aperture. Then, if needed, increase the ISO
- Post-processing (normalize exposure, debayer, color conversion, some DCP/3DLUT > compressed JPG file)

The above would give the sharpest, highest-SNR photo the camera can produce, but wouldn't account for what you, as the photographer, want.
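The exposure part of the steps above could be sketched roughly like this (plain C; every name here, the "stops still needed" input, and the ISO 1600 ceiling from earlier in the thread are my assumptions, not a real ML API):

```c
#include <assert.h>
#include <math.h>

/* Hypothetical auto-exposure sketch: start at the reciprocal-rule shutter,
 * base ISO and the best-contrast aperture; if the scene still needs more
 * light, open the aperture first, then raise ISO as a last resort. */
typedef struct {
    float shutter_s;  /* exposure time in seconds */
    float iso;
    float aperture;   /* f-number */
} Exposure;

static Exposure auto_expose(float focal_length_mm, float best_contrast_fnum,
                            float min_fnum, float stops_needed)
{
    /* Reciprocal rule: a 50mm lens -> 1/50s. */
    Exposure e = { 1.0f / focal_length_mm, 100.0f, best_contrast_fnum };

    /* Open the aperture first, one stop at a time... */
    while (stops_needed > 0.0f && e.aperture > min_fnum) {
        e.aperture /= sqrtf(2.0f);      /* one stop wider */
        if (e.aperture < min_fnum)
            e.aperture = min_fnum;      /* clamp at the lens limit */
        stops_needed -= 1.0f;
    }
    /* ...then increase ISO, stopping at the 1600 ceiling. */
    while (stops_needed > 0.0f && e.iso < 1600.0f) {
        e.iso *= 2.0f;
        stops_needed -= 1.0f;
    }
    return e;
}
```

A real implementation would of course iterate against actual metering (the ETTR highlight check) instead of a precomputed "stops needed" number.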
#110
Other experimental builds / Re: Cleaner ISO presets
January 21, 2020, 07:35:04 AM
Nice results.
Quote
Surely we all want less underexposed and overexposed pixels counts...?
I think the disagreement is just about the accuracy of the measurement method. In your video, how much of the improved DR was caused by the ADTG tweak versus changes in exposure? If you're changing the ISO from stock 100 to 108, this will also increase exposure and could account for the changes in DR.
I doubt that's the case, though (the noise reduction at ISO 800 is too big), and I think these tweaked values should be easily accessible to everyone (as simple as enabling a module).
Nice job @timbytheriver.
#111
General Chat / Re: Looking to the future
January 19, 2020, 07:11:58 AM
For me, any auto mode is useless. But that might be because the companies don't provide threshold values. Being able to choose the range of values the camera is restricted to would make it much more usable. Also, decimal control over ISO in auto mode would help increase the accuracy of auto-ETTR.
Speaking of which, ETTR should be the default in all cameras using auto modes, since they can easily bring exposure down for display previews and compressed images (JPGs).
#112
Works for me. I'm using FF 72.0.1 on Windows, with JavaScript disabled and uBlock.
#113
Share Your Videos / Re: tommy's tree talks - season 2
January 16, 2020, 11:08:10 AM
Cool. There's a little too much magenta in the shadows... if it wasn't an artistic choice, see if you can bring the red curve down a little, so the shadows get a cyan tone instead of magenta ;)

#114
General Chat / Re: New Canon Info
January 15, 2020, 11:59:12 PM
Interesting, thanks for sharing. This blog is amazing; there are many other interesting articles there.
Too bad this technology will probably be patented and only Canon will benefit from it. Same path and old story as others like Foveon, the Panavision PX-Pro color spectrum filter, and the ARRI FSND filter. I understand they need to make money, but we would have much better technology if those companies collaborated with each other sometimes.

Quote
I don't think it's a big deal because 95% of the time you don't get in touch with moire in general work, just from time to time, 2-3%. Of course it's good for the consumer, but it's not a deal breaker.
Well, this is exactly one of the areas where analog film is still better than digital. Harsh edges and aliasing/moire play a significant role for me in that bad "digital feel". Also, this improvement is important for other areas, such as fashion photography.
#115
Quote from: Ilia3101 on January 09, 2020, 03:54:56 PM
No I meant does anyone know why Adobe image processing looks so much cleaner, it just shows nice grain instead of noise, while MLV App and other raw converters show blotchy colour ugliness.
Oh, I see. I guess it's either the default chroma denoise or some trickery in the demosaicing (it uses AMaZE, IIRC, but the implementation might be doing something different). They probably also have better color processing and hot/cold pixel removal, which might affect the look of the noise.
I think using RawTherapee for comparison together with ACR would also be useful, since the code is open and RT is a very stable and complete piece of software.
#116
Share Your Photos / Re: Extreme ETTR example
January 09, 2020, 02:48:40 AM
Unless you were running from a lion, the best result would come from doing proper HDR. Dual ISO should be reserved for fast-moving subjects, like sports and journalism, IMO. If you have a near-static image (such as garry's example) and a tripod, do HDR blending instead.
#117
Quote from: Ilia3101 on January 08, 2020, 11:25:48 PM
And does anyone know why adobe looks so much cleaner in terms of noise sometimes? It's frustrating.
I get the opposite effect. Can you post an example? AdobeRGB gives me chroma noise in saturated cyans/blues. The three usable gamuts in MLVApp for me are sRGB, LogC and AP1.
#118
More samples here, Ilia (shot with a 50D); they might be useful because they also have skin tones in them. If there's anything else we can do to help enhance color processing in MLVApp, let us know:
https://we.tl/t-46aTP9T3wj
#119
Got it! Mixed lighting is very tricky, indeed. In Resolve there's a tool called "Hue vs Hue". You can find the exact hue of the lights (I think there's a picker) and then shift them to something close to what you see in real life. Sometimes this will cause other tones to shift too (in your case dark oranges, or other tones of red). In that case you can use the red curve in RGB curves to adjust the overall red balance, or use a mask to isolate only the red LEDs...
#120
Quote from: masc on December 21, 2019, 10:20:02 AM
The processing code uses platform independant standard C libraries, which use char* to describe the filename. char is 8bit.
How about using wstring (const wchar_t)? I've read some people recommending it while I was searching for solutions...
Quote
(just) for windows :P (very uncool).
Microsoft, as always. If we had an alternative to Premiere Pro on free Unix-like systems, I would never use Windows again.

edit - this guy explains it well:
https://stackoverflow.com/questions/402283/stdwstring-vs-stdstring/402918#402918
Quote
Applications using char are said "multibyte" (because each glyph is composed of one or more chars), while applications using wchar_t are said "widechar" (because each glyph is composed of one or two wchar_t. See MultiByteToWideChar and WideCharToMultiByte Win32 conversion API for more info.

Thus, if you work on Windows, you badly want to use wchar_t (unless you use a framework hiding that, like GTK+ or QT...). The fact is that behind the scenes, Windows works with wchar_t strings, so even historical applications will have their char strings converted in wchar_t when using API like SetWindowText() (low level API function to set the label on a Win32 GUI).
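MultiByteToWideChar/WideCharToMultiByte are Windows-only, so as a portable illustration of why a char string is "multibyte", here's a minimal (non-validating, BMP-only) UTF-8 to wchar_t decoder; the function name is mine. Real code should use the Win32 APIs or a library like ICU:

```c
#include <assert.h>
#include <stddef.h>
#include <wchar.h>

/* Minimal UTF-8 -> wchar_t decoder (BMP only, no error checking), just to
 * show that one glyph can span several chars but one wchar_t. Returns the
 * number of wide characters written. Not for production use. */
static size_t utf8_to_wide(const unsigned char *s, wchar_t *out, size_t cap)
{
    size_t n = 0;
    while (*s && n < cap) {
        wchar_t c;
        if (*s < 0x80) {                       /* 1-byte (ASCII) */
            c = *s++;
        } else if ((*s & 0xE0) == 0xC0) {      /* 2-byte sequence */
            c = (wchar_t)(((s[0] & 0x1F) << 6) | (s[1] & 0x3F));
            s += 2;
        } else {                               /* 3-byte sequence (BMP) */
            c = (wchar_t)(((s[0] & 0x0F) << 12) |
                          ((s[1] & 0x3F) << 6) | (s[2] & 0x3F));
            s += 3;
        }
        out[n++] = c;
    }
    return n;
}
```

So "é" is two chars (0xC3 0xA9) but a single wchar_t (U+00E9), which is exactly why byte-oriented filename handling breaks on Windows.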
#121
That t-shirt is a demosaicing nightmare! haha.
Very nice video. Very industrial feeling, as always.

One note: in the scene at 32s, the lights on the keys are pinkish, but at 1m17s they're red. Don't know if that's an inaccuracy in the color processing or not. Are you using ACES?
#122
I used Reinhard 3/5 on my last project. Very pleased with the result compared to Log-C. It should be the default tonemapping, together with AP1 as the gamut. Gives the best results for me.
#123
Feature Requests / Re: What About Black Shading ?
December 21, 2019, 06:57:07 AM
Quote from: timbytheriver on December 19, 2019, 09:49:16 AM
@Luther Do you think there would be a hit to performance if it's encoding another frame?
I don't think so, as it would just copy the frame into each MLV. Another solution would be a "link" inside the MLV metadata pointing to the dark frame file. That way MLVApp and other software would know where the frame is without copying it into each MLV (it would also save some CF space). I like the second solution more because of its efficiency.
Another idea would be to automate the creation of such dark frames, because they are more effective when taken shortly before the shot they will be subtracted from (for thermal reasons, I think). For example, if you have an event to photograph/record, the module could take multiple dark frames throughout the event. Based on the date of each MLV, it could then link the closest dark frame in that period. This would increase dark frame effectiveness, since the noise changes as the camera heats up.
The module should also record the current camera settings (mainly shutter speed and ISO), because the noise also varies with exposure time and sensitivity.
Don't know if averaging would be possible inside the camera, though. But folders could be created to store multiple dark frames at the same time, leaving the averaging to MLVApp or other software (the averaging could be automated too).
Just some ideas.
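The averaging/subtraction idea could look something like this (plain C sketch; the function names, the uint16_t raw layout and the black-level handling are my assumptions, not MLVApp code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Average several dark frames into a master dark. Averaging suppresses the
 * random part of the dark signal while keeping the fixed pattern noise that
 * we actually want to subtract. */
static void make_master_dark(const uint16_t *darks, size_t n_frames,
                             size_t n_pixels, uint16_t *master)
{
    for (size_t p = 0; p < n_pixels; p++) {
        uint32_t sum = 0;
        for (size_t f = 0; f < n_frames; f++)
            sum += darks[f * n_pixels + p];
        master[p] = (uint16_t)(sum / n_frames);
    }
}

/* Subtract the master dark's excess over the black level from a light
 * frame, clamping so pixels never drop below the black floor. */
static void subtract_dark(uint16_t *light, const uint16_t *master,
                          size_t n_pixels, uint16_t black_level)
{
    for (size_t p = 0; p < n_pixels; p++) {
        uint16_t d = master[p] > black_level ? master[p] - black_level : 0;
        light[p] = light[p] > d ? light[p] - d : black_level;
    }
}
```

Matching the dark's shutter speed/ISO to the light frame, as described above, is what makes the subtracted pattern actually correspond to the shot.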

Quote from: Kharak on December 19, 2019, 10:15:20 AM
But the Ursa is not like DFA, where you need to do it in post. You probably can, but that is not how the built-in black shading works.
It works even on raw? Hard to believe they're doing such low-level processing inside the camera. Applying it to debayered data in-camera is easier, but with raw it gets more complicated...
#124
@masc, I think that's because Windows uses UTF-16 instead of UTF-8. They have a function called "WideCharToMultiByte" to convert properly:
https://docs.microsoft.com/pt-br/windows/win32/api/stringapiset/nf-stringapiset-widechartomultibyte?redirectedfrom=MSDN
https://stackoverflow.com/questions/215963/how-do-you-properly-use-widechartomultibyte/215973#215973

Dunno if this will solve the problem; I don't have much programming knowledge...
#125
Feature Requests / Re: What About Black Shading ?
December 18, 2019, 06:15:55 PM
Quote from: timbytheriver on December 18, 2019, 04:31:45 PM
@Luther Thanks for the clarification! Think the RED does the calibration in-camera, which would be useful...
Apparently there's already a block in the MLV format for dark and flat frames:
https://www.magiclantern.fm/forum/index.php?topic=24470.msg220691#msg220691

So, someone "just" needs to automate the process with a module and embed the frame in each .MLV file (MLVApp would also need to be modified to recognize it automatically).