Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Luther

Pages: [1] 2 3 ... 9
General Development Discussion / Re: The MLV format
« on: February 20, 2020, 11:19:02 PM »
adding a proprietary block would be easy and we don't have a way to control that if someone does it.
But why is that a problem? I'm sorry, I can't see it. Other formats like DNG probably have some proprietary blocks implemented in some software, but the general structure is open... so why would implementing something proprietary hurt the project? It's not like these changes become mandatory to the open "standard" (the way blobs do in operating systems), so I would think this is fine, no?
1. Someone looks directly at the mlv_structure.h LGPL header, writes it down on paper, and later rewrites the structures in C based on their paper notes, is their code under LGPL?

2. Someone looks directly at the mlv_structure.h LGPL header, and rewrites structures in C while looking at the original structures, is their code under LGPL?
I'm not really knowledgeable in that matter, but I think the LGPL applies in both 1 and 2, since the structure would be exactly the same (and that can be verified, which is what matters most in lawsuits).
3. Can structures even be put under a code license like GPL/LGPL? They are not code, but descriptions of how some fields are arranged in memory/files right? Or are they considered code?
I think so. Maybe the GPL is not the best license in this scenario. Markup languages such as XML, HTML and OWL have been under this W3C license for many years. Those languages don't have specific code, but they do have a structure which other software can use.
But, again, I'm not a specialist on that. Might be better to ask on HackerNews or even Stack Exchange.

General Development Discussion / Re: The MLV format
« on: February 20, 2020, 09:43:15 PM »
Let's say we write an official MLV format specification... describing every block structure with illustrations and text etc...

Then someone writes out C structs for MLV blocks based on reading only the specification. Are their C structures under the LGPL/GPL like mlv_structures.h/mlv.h is?

AFAIK, no. If you write a model or specification in, say, UML/TLA+, the specification itself will be GPL, but not the "by-products" derived from it. Also, you wouldn't know how to prove someone used the specification in the first place (unless they wrote it in a design-by-contract language, such as SPARK). Also, the LGPL permits closing the code; only the GPL doesn't.
Why does that worry you? Or was the question just out of curiosity? If someone wants to implement another MLV encoder/decoder, I don't see a problem with that; this is exactly what open source should want: people freely sharing information.

1) what's the best practice (setting wise) to be able to zoom at night on the stars?
On the "Prefs" menu, you can turn on the "Increase SharpContrast" inside "LiveView zoom tweaks". This will make it easier to see if your image is on focus or not. Also, on the "Overlay" menu, there's the option called "Magic Zoom" which you can play with.
2) Any suggestions (ML setting wise) for the actual shoot?
You can automate nearly everything using ML. On the "Shoot" menu, see the Intervalometer, bulb timer, advanced bracket, etc.
3) From your experience with the EOS-M, any recommendations of do and don'ts for such shots (like max ISO etc) ? nights will be cold if it makes any difference.
I don't own an EOS M, but Canon ISOs are usually only usable up to 1600. I wouldn't go beyond that.
4) intervalometer: - how do I stop it once it has started (without taking out the battery)? Is there a way to set it so there's absolutely no gap between pictures taken (not even 1 sec)?
About stopping, dunno. But you can set no gap. In the Canon menu, set the photo mode ("Drive mode") to "continuous shooting". Then in the ML menu, set the Intervalometer to "Start after: 0s".
5) I enabled the "Global Draw" and really like the fact I can see the focusing distance, but is there a way to control the camera settings, or the x10 zoom, while at that window? At the moment I need to switch to one of the regular displays by hitting the INFO button
See the LiveView zoom tweaks menu.
6) I was reading some on Dual ISO and ETTR, but is seems like it is not something I should use for long exposures, am I correct?
Dual ISO won't help, but ETTR is important for any photo. The more light you capture, the better the SNR will be (less noise, more information). Also, in your case, you could make multiple exposures and combine them (HDR). This would reduce noise and keep everything within the dynamic range. I always use and recommend HDRMerge.

Share Your Videos / Re: EOS M RAW + C-MOUNT ELGEET 13MM F1.5 Lens | x3 Crop
« on: February 18, 2020, 02:38:13 PM »
Damn, if it wasn't for the aliasing in some trees and the motion blur difference, I could easily mistake this for some kodak film from the 80s (particularly the 250D 5297). Very good job on the color grading @ZEEK. Can you explain a bit your post-processing steps? Any LUTs? What impressed me the most was the highlight rolloff it has, very film-like.

@masc @Ilia3101 @bouncyball
Did you guys see this new feature on Rawtherapee? This is doing 'magic' with some of my photos:

Might be too difficult to adapt the code to MLVApp, but something like that for video would be awesome. Maybe some day.

Hey @Ilia3101, nice progress on the last commits about the raw WB, blue desaturation and changing Reinhard to float. Couldn't test yet (might be able to next week), but I'm trying to keep track of those changes :)

You don't need an IDT for raw files in Resolve's ACES environment.
I don't think this is accurate. The IDT contains spectral data, while the DNG contains only a simple matrix. And while it's true that you can use the ACES color space on any image, ACES is more than just a color space; it's a color management system. We have previously discussed this here.
Best practice is to target whatever colorspace the intended playback device displays and only ever grade for the device colorspace(s) you can physically view i.e. your monitor(s).
True, but you can easily convert Rec.2020 to any other display space without losing information. The same cannot be said for Rec.709 or P3.
Lastly. Do you really need to use ACES? I like ACES a lot and I can see the appeal for multi-cam, multi format shows, CGI and collaborative,cross platform workflows but IMO it's overkill for most things, especially if you don't fully understand how it works and what it's for.
If you're already using Alexa Wide Gamut, I agree. But going from a Rec.709 pipeline to ACES is a huge jump in color quality.

Nice tutorial, thanks. Some notes:
There's no need to use that; ACES already has Log variants built in. There's a comparison here between Log-C and ACES.
ACEScg is meant for computer graphics work (particularly green/blue screen compositing). Normally you should use ACEScct instead.
Output Space: Input - Canon - Linear - Canon DCI-P3 Daylight
Output Space: Input - Generic - sRGB - Texture
It's not good practice to convert color spaces multiple times; in fact, ACES was created to avoid exactly that.
Also, the "best practice" now is to target Rec.2020 instead of Rec.709/P3.

General Chat / Re: Looking to the future
« on: January 24, 2020, 12:47:22 PM »
Dude, read the article linked in the OP. The article already talks about Fuji's "auto" functions. The hypothesis in the article is whether the camera will have more functions than that and go "fully auto". My answer is: phone cameras already do that, but it will not happen in professional photography. Just that, simple.
"Auto" does not equal "reads the users mind."
Auto means automated. If you're telling the camera what to do, it's not automated and, by definition, it's not "fully auto".

End of story here, let's move on.

General Chat / Re: Looking to the future
« on: January 23, 2020, 02:41:14 PM »
It's not hard: you simply tell the camera.
Yes, but @garry23 said "fully auto". Telling the camera what to do is not "fully auto".
You will have to choose the focus in post-production anyway, so not fully auto.

General Chat / Re: Looking to the future
« on: January 21, 2020, 10:48:12 AM »
cameras could be fully auto, as they gather the data to allow you to ‘fix’ the exposure and focus in post.
Don't think this will be the case. At least not for professional photographers, because settings are changed not just to get good exposure, but also to create the artistic effect you want. How would the camera know you want shallow DOF even in daylight? It won't. How would it know you want a long exposure instead of a 1/1000 shutter speed? It won't.
What it could do, though, is get the proper exposure and focus for a given scene, which is basically what smartphones already do:
- Set shutter speed to match the focal length of the lens (50mm → 1/50)
- Set ISO as low as possible
- Open the lens wide, read the scene and focus on the main subject
- Set aperture to the best-contrast value (according to the MTF data provided)
- Decrease shutter speed until highlights are about 1% overexposed (ETTR)
- Auto WB
- If it still needs more light, open the aperture wider; then, if needed, increase ISO
- Post-processing (normalize exposure, debayer, color conversion, some DCP/3DLUT → compressed JPG file)

The above would give the sharpest, highest-SNR photo the camera can produce, but wouldn't account for what you, the photographer, want.

Reverse Engineering / Re: Cleaner ISO presets
« on: January 21, 2020, 07:35:04 AM »
Nice results.
Surely we all want lower underexposed and overexposed pixel counts…?
I think the disagreement is just about the accuracy of the measurement method. In your video, how much of the better DR was caused by the ADTG tweak versus changes in exposure? If you're changing the ISO from stock 100 to 108, that also increases exposure and could account for the changes in DR.
I doubt that's the case, though (the noise reduction at ISO 800 is too big), and I think these tweaked values should be easily accessible to everyone (as simple as enabling a module).
Nice job @timbytheriver.

General Chat / Re: Looking to the future
« on: January 19, 2020, 07:11:58 AM »
For me, any auto mode is useless. But that might be because the companies don't let you set threshold values. Being able to choose the range of values the camera is restricted to would make it much more usable. Also, giving auto mode decimal control over ISO would help increase the accuracy of auto-ETTR.
Speaking of it, ETTR should be the default in all cameras' auto modes, since they can easily bring exposure back down for display previews and compressed images (JPEGs).

Works for me. I'm using FF 72.0.1 on Windows, with javascript disabled and ublock.

Share Your Videos / Re: tommy's tree talks - season 2
« on: January 16, 2020, 11:08:10 AM »
Cool. There's a little too much magenta in the shadows... if it was not an artistic choice, see if you can bring the red curve down a little, so the shadows get a cyan tone instead of magenta ;)

General Chat / Re: New Canon Info
« on: January 15, 2020, 11:59:12 PM »
Interesting, thanks for sharing. This blog is amazing; there are many other interesting articles there.
Too bad this technology will probably be patented and only Canon will benefit from it. It's the same old story as Foveon, the Panavision PX-Pro color spectrum filter, and the Arri FSND filter. I understand they need to make money, but we would have much better technology if those companies collaborated with each other sometimes.

I don't think it's a big deal, because 95% of the time you don't run into moiré in general work, just 2-3% of the time
Of course it's good for the consumer, but it's not a deal breaker
Well, this is exactly one of the areas where analog film is still better than digital. Harsh edges and aliasing/moiré play a significant part in that bad "digital feel" for me. Also, this improvement is important for other areas, such as fashion photography.

No I meant does anyone know why Adobe image processing looks so much cleaner, it just shows nice grain instead of noise, while MLV App and other raw converters show blotchy colour ugliness.
Oh, I see. I guess it's either the default chroma denoise or some trickery in the demosaicing (it uses AMaZE, IIRC, but the implementation might be doing something different). They also probably have better color processing and hot/cold pixel removal, which might affect the look of the noise.
I think using RawTherapee for comparison together with ACR would also be useful, since the code is open and RT is a very stable and complete piece of software.

Share Your Photos / Re: Extreme ETTR example
« on: January 09, 2020, 02:48:40 AM »
Unless you were running from a lion, the best result would come from doing proper HDR. Dual ISO should be reserved for fast-moving subjects, like sports and journalism. If you have a near-static image (such as the example from garry) and a tripod, do HDR blending instead, IMO.

And does anyone know why adobe looks so much cleaner in terms of noise sometimes? It's frustrating.
I get the opposite effect. Can you post an example? AdobeRGB gives chroma noise in saturated cyan/blue for me. The three usable gamuts on MLVApp for me are sRGB, LogC and AP1.

More samples here Ilia (shot with 50D), they might be useful because they also have skin tones in it. If there's anything else we could help in enhancing color processing on mlvapp, let us know:

Share Your Videos / Re: music video shot at iso 118 - updated
« on: December 21, 2019, 12:18:37 PM »
Got it! Mixed lighting is very tricky, indeed. In Resolve there's a tool called "Hue vs Hue". You can find the exact hue of the lights (I think there's a picker) and then shift them to something close to what you see in real life. Sometimes this will cause other tones to shift too (in your case dark oranges, or other tones of red). In that case you can use the red curve in RGB curves to adjust the overall red balance, or use a mask to isolate only the red LEDs...

The processing code uses platform-independent standard C libraries, which use char* to describe the filename. char is 8-bit.
How about using wstring (const wchar_t)? I've read some people recommending it while I was searching for solutions...
(just) for windows :P (very uncool).
Microsoft, as always. If we had an alternative to Premiere Pro in free unix-like systems I would never use windows again.

edit - this guy explains well:
Applications using char are said to be "multibyte" (because each glyph is composed of one or more chars), while applications using wchar_t are said to be "widechar" (because each glyph is composed of one or two wchar_t units). See the MultiByteToWideChar and WideCharToMultiByte Win32 conversion APIs for more info.

Thus, if you work on Windows, you badly want to use wchar_t (unless you use a framework hiding that, like GTK+ or QT...). The fact is that behind the scenes, Windows works with wchar_t strings, so even historical applications will have their char strings converted in wchar_t when using API like SetWindowText() (low level API function to set the label on a Win32 GUI).

Share Your Videos / Re: music video shot at iso 118 - updated
« on: December 21, 2019, 07:09:41 AM »
That t-shirt is a demosaicing nightmare! haha.
Very nice video. Very industrial feeling, as always.

One note: in the scene at 32s, the lights on the keys are pinkish, but at 1m17s they are red. Don't know if that's an inaccuracy in the color processing or not. Are you using ACES?

I used Reinhard 3/5 on my last work. Very pleased with the result compared to Log-C. It should be the default tonemapping, together with AP1 as the gamut. Gives the best results for me.

Feature Requests / Re: What About Black Shading ?
« on: December 21, 2019, 06:57:07 AM »
@Luther Do you think there would be a hit to performance if it's encoding another frame?
Don't think so, as it would just copy the frame inside each MLV. Another solution would be a "link" inside the MLV metadata pointing to the dark frame file. That way MLVApp and other software would know where the frame is without the need to copy it into each MLV (it would also save some CF space). I like the second solution more because of its efficiency.
Another idea would be to automate the creation of such dark frames, because subtraction is more effective when the dark frame is taken shortly before the shot it will be subtracted from (for thermal reasons, I think). For example, if you have an event to photograph/record, you could take multiple dark frames with the module throughout the event. Based on the date of each MLV, the module could link the closest dark frame in that period. This would increase the dark frame's effectiveness, as the noise changes when the camera heats up.
The module should also record the current camera settings (mainly shutter speed and ISO), because the noise also varies with exposure time and sensitivity.
Don't know if averaging would be possible inside the camera, though. But folders could be created to store multiple dark frames at the same time, leaving the averaging task to MLVApp or other software (the averaging could be automated too).
Just some ideas.

But Ursa is not like DFA where you need to do it in post, you probably can, but that is not how the built in black shading works.
It works even on raw? Hard to believe they're doing such low-level processing inside the camera. Applying that to debayered data inside the camera is easier, but with raw it gets more complicated...
