Messages - Audionut

#2
Quote from: mlrocks on August 03, 2021, 06:37:22 AM
Thanks for the reply, MASC. Probably this auto ETTR + dual ISO is more useful for landscape than for street videography or portrait.

Dual ISO is only for high-dynamic-range scenes, since it doesn't come for free: it reduces resolution in the highlights and shadows.

ETTR is predominantly a highlight-priority mode, with auto ETTR being an automated version. There are some nifty features of auto ETTR that will be useful for your situation though.

The first is Midtone SNR Limit, which will protect the midtones at the expense of the highlights.

Then you could try the auto ETTR option "Link to Dual ISO" if the scene might have high dynamic range.

If you're happy to use the default Canon exposure meter, and skin tones are really important, set the camera to spot metering mode and meter the skin tone.
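To make the ETTR idea concrete, here's a minimal sketch in Python: push a chosen highlight percentile up to just below the raw clip point. The function name, percentile choice, and white level are all illustrative; this is not ML's actual auto ETTR implementation (which also honours settings like Midtone SNR Limit).

```python
import numpy as np

def auto_ettr_shift(raw, white_level=16383, highlight_pct=99.9,
                    headroom_ev=0.5, max_shift_ev=4.0):
    """Suggest an EV shift that puts the chosen highlight percentile
    just below the clip point (ETTR = highlight priority)."""
    highlight = max(np.percentile(raw, highlight_pct), 1.0)
    # EV distance from the current highlight level to the white level,
    # minus a small safety margin.
    shift = np.log2(white_level / highlight) - headroom_ev
    # A midtone SNR floor would cap the *negative* shift here, refusing
    # to pull exposure down so far that the midtones drown in noise.
    return float(np.clip(shift, -max_shift_ev, max_shift_ev))

# Highlights sitting ~2 EV below clipping -> suggests roughly +1.5 EV.
rng = np.random.default_rng(0)
frame = rng.integers(0, 16383 // 4, size=(120, 180))
print(f"suggested shift: {auto_ettr_shift(frame):+.2f} EV")
```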
#3
Raw Video / Re: AI algorithms for debinning
August 23, 2021, 03:29:32 PM
I've briefly used a friend's A7R, and the most striking thing for me was how easy it is to take an image with an exposure that in no way, shape, or form resembles what's shown on the liveview or EVF. Surely there must be a setting to adjust that, but in any case, it was interesting being able to change the shutter 4 stops and have both the liveview and EVF show the exact same brightness.

With a regular old bouncy-mirror VF, there's always the exposure meter. And while it too has its quirks, it's never been 4 stops or more wrong! Anyway....

For the same DOF and FOV, f/4.0 on full frame is equivalent to f/2.5 on a 1.6x crop body. All else being reasonably equal, I'll take an f/4.0 lens (or the same lens at f/4.0) over an f/2.5 lens, solely because of sharpness.

If DOF isn't an issue, then I'll take the 1 and 1/3rd stop more light hitting the sensor, without even blinking, but that's just me.
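For anyone checking the arithmetic, a quick sketch (assuming the Canon 1.6x crop factor):

```python
import math

crop = 1.6                 # Canon APS-C crop factor
f_ff = 4.0                 # f-number on full frame

# Same DOF and FOV: divide the f-number by the crop factor.
f_equiv = f_ff / crop      # = f/2.5 on the crop body

# Extra light gathered by shooting the crop body at f/2.5 instead of f/4:
stops = 2 * math.log2(f_ff / f_equiv)
print(f"f/{f_equiv:.1f} equivalent, {stops:.2f} stops more light")  # ~1.36, i.e. 1 1/3
```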
#4
Quote from: NightlyMattya22 on March 17, 2021, 12:46:33 PM
But wouldn't there be a lag?

IIRC, the whitepaper says it can do 60fps.

Quote from: NightlyMattya22 on March 17, 2021, 12:46:33 PM
If I am right, it simultaneously outputs two different sets of amplified signals to process. I assume that is what Canon is doing?

Depends on exactly what you mean. For this discussion, exposure means one actuation of the shutter.

Samsung smartphones do HDR with two different exposures. This must be the case, given the motion artifacts.

Canon is doing HDR with one exposure. The shutter opens and the sensor captures the signal. This signal (the exposure) is then stored in memory.
From there, it is amplified twice: once at minimum amplification to capture the highlights, and once at a higher amplification to capture the shadows.
These two differently amplified images (of exactly the same exposure) are then blended together to deliver the final HDR image.
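A toy model of that single-exposure, dual-amplification idea (illustrative only, not Canon's actual pipeline): read the same stored signal out at two gains, use the high-gain readout in the shadows, and fall back to the low-gain readout where the high-gain one clips.

```python
import numpy as np

def dual_gain_hdr(exposure, low_gain=1.0, high_gain=8.0, full_scale=1.0):
    """One exposure, read out twice at different amplifications, blended.
    Toy model only; the real benefit comes from lower read noise on the
    high-gain path, which this sketch doesn't simulate."""
    # Two readouts of the *same* stored signal -> no motion artifacts.
    low = np.clip(exposure * low_gain, 0.0, full_scale)    # keeps highlights
    high = np.clip(exposure * high_gain, 0.0, full_scale)  # lifts shadows

    # Use the high-gain readout wherever it hasn't clipped, normalised
    # back to a common scale; elsewhere fall back to the low-gain one.
    ok = high < full_scale
    return np.where(ok, high / high_gain, low / low_gain)

scene = np.linspace(0.0, 1.0, 9)   # deep shadows -> clipped highlights
print(np.round(dual_gain_hdr(scene), 3))
```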
#6
ML dual_iso is the result of reverse engineering. It was noticed that there was more than one amplifier (attached to the sensor) that could be adjusted separately. This allowed dual_iso, but it does have some drawbacks.

Canon's solution is bespoke. It is likely that there is still more than one amplifier (column gains, IIRC?); however, instead of finding more DR by adjusting these amplifiers separately, with all of the drawbacks associated with that, Canon doesn't flush the photosites when they are first read, and instead retains the photosite "memory" for more than one amplification pass.

ML....
2 lines of image signal at ISO xx
next 2 lines of image signal at ISO yy
next 2 lines of image signal at ISO xx
next 2 lines of image signal at ISO yy
next 2 lines of image signal at ISO xx
next 2 lines of image signal at ISO yy
etc...
etc...


Then a post-processing application is used to match the ISO yy brightness to the ISO xx brightness.


Canon scans the entire image at ISO xx, then scans it again at ISO yy (and presumably could continue to scan it as often as they want, to a point). Basically, this is ML dual_iso with none of the drawbacks.
Oh, and I assume all of the post-processing is done in camera. It would be nice if they allowed a raw file of each scan to be saved.
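For illustration, the brightness-matching step might look something like this toy (the real post tool, cr2hdr, also has to interpolate the resolution lost in the half-resolution zones; the 2-EV gap here is just an example):

```python
import numpy as np

def match_dual_iso(frame, ev_gap=2.0):
    """Toy version of the post step: rows alternate in pairs between
    ISO xx and ISO yy; scale the ISO yy pairs back down to ISO xx
    brightness. (cr2hdr also interpolates the lost resolution.)"""
    out = frame.astype(float)
    # Row pattern: 2 lines at ISO xx, 2 lines at ISO yy, repeating.
    iso_yy_rows = (np.arange(frame.shape[0]) // 2) % 2 == 1
    out[iso_yy_rows] /= 2.0 ** ev_gap   # e.g. ISO 1600 rows back to ISO 400
    return out

raw = np.tile([[100.0], [100.0], [400.0], [400.0]], (2, 6))  # yy rows 4x brighter
print(match_dual_iso(raw)[:4, 0])    # -> [100. 100. 100. 100.]
```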
#7
Camera-specific Development / Re: Canon 650D / T4i
March 12, 2021, 12:14:22 AM
Some development work and extra builds are available in this thread.
#8
Looks like it broke somewhere between build #5 and build #6.

Quite a few changes were pushed in #6, including the merge of lua_fix. No time to pursue this further, but there's a working build from the lua_fix experiment.

I would hunt back through lua_fix and find the relevant changes in that branch.

edit: log shows 500D should have been fixed in #451, which includes the commit you already found.

500D broke here: https://builds.magiclantern.fm/jenkins/view/Experiments/job/lua_fix/446/
#9
Camera-specific Development / Re: Canon 7D Mark I
February 21, 2021, 11:06:56 AM
dfort continued development work on 7D over here.
#11
General Chat / Re: Any thoughts on this idea?
February 20, 2021, 06:19:12 PM
Quote from: Walter Schulz on February 20, 2021, 05:31:04 PM
Any other method than firmware update to make it run won't be covered.

I don't think that part has any effect on the patent per se; it's simply that this is patent trolling, and it specifically requires the manufacturers to implement it.

Hence the statement......

Quote: That is of course after it has been licensed from Rockwell and Langlotz. Rockwell is encouraging those interested in the feature to contact Canon, Nikon, Sony, and Fujifilm directly and ask for this feature, and has provided Langlotz's contact information that can be sent to these companies as part of that request.

Crowd-sourcing patent trolling. What a world we live in.
#12
General Chat / Re: Any thoughts on this idea?
February 20, 2021, 05:11:56 PM
"Fancy" digital zoom is still just digital zoom.

Quote: Rockwell explains using a 100-400mm lens as an example. He says that in this particular example, a camera would shoot normally (full-frame) from the 100-300mm range of that lens but as the photographer approaches the 400mm end, the camera would intelligently apply an APS-C crop until the full zoom length is reached, effectively turning the final zoom into 800mm.

In other words, 3/4 of the optical zoom = 100-300mm, whereas the last 1/4 of optical zoom = 300-800mm.

The 16-35mm example is even betterer. 3/4 of the optical zoom = 16-30mm, whereas the last 1/4 = 30-75mm. In either case, 1/4 of the optical zoom travel creates a massive effective zoom change over that narrow range.

When it comes to quality, more pixels capturing the image = better quality. This effectively creates a situation where the "system" uses digital zoom instead of optical zoom. The graph is a good example: at 162mm of optical zoom, this "system" will create an image with an effective zoom of 200mm, using digital zoom instead of the available optical zoom.
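For concreteness, here's roughly what the 100-400mm numbers in that quote imply, assuming (simplistically) that the crop only starts ramping at 300mm and reaches 2x at the long end. The graph in the article evidently starts ramping earlier, hence its 162mm -> 200mm data point.

```python
def effective_focal(optical_mm, tele=400.0, crop_start=300.0, max_crop=2.0):
    """Effective focal length under a progressive digital crop
    (numbers lifted from the 100-400mm example in the quote)."""
    if optical_mm <= crop_start:
        return optical_mm                    # full frame, no crop
    # Ramp the crop factor from 1.0 at crop_start to max_crop at tele.
    t = (optical_mm - crop_start) / (tele - crop_start)
    return optical_mm * (1.0 + t * (max_crop - 1.0))

for f in (100, 200, 300, 350, 400):
    print(f"{f}mm optical -> {effective_focal(f):.0f}mm effective")
# 350mm already reads as 525mm; the last quarter of the zoom ring
# sweeps 300-800mm of effective reach.
```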

Pffft......

It's a gimmick. And they want to cash in on it.
#13
General Chat / Re: Any thoughts on this idea?
February 20, 2021, 12:47:44 PM
Quote from: Kharak on February 20, 2021, 12:22:40 PM
Why not crop in post?

Exactly. Feature request denied  :P

It seems like it would be awfully sensitive once the crop zoom kicks in, and you'll also lose image quality at that point.

#14
Raw Video Postprocessing / Re: UGLY clipping samples
February 02, 2021, 01:08:13 PM
Quote from: a1ex on February 02, 2021, 08:53:12 AM
The best way to deal with these issues, in my opinion, would be with some kind of gamut compression - but you'd have to carefully choose a suitable color space for that. In particular, I can tell you for sure that CIELAB is not the right color space for this purpose (long answer in the links shared earlier).

There must be some form of mapping to bring the out-of-gamut colors inside the expected display gamut. But how do you map them?

Do you simply drag all of the out-of-gamut colors to the edge of the destination color space?
Do you apply some sort of perceptual mapping, where you not only map the out-of-gamut colors into the destination space, but also remap the rest of the colors to retain some form of perceptual linearity?

What about correcting for brightness as the saturation changes?

https://www.sciencedirect.com/topics/engineering/perceptual-colour-space

Color science is way, way above this pleb's head.

I only very rarely follow along with madVR development, but madshi has to tackle a similar problem for his HDR > SDR conversion.
Quote: 2) Let's say you have a highly saturated red color which the HDR Blu-Ray has encoded with 4,000 Nits. And your display actually *can* do 4,000 Nits. No problems, right? Actually yes, BIG problem, because the display peak Nits capability is for white, not for red. So what should a tone mapping algorithm do now? Should it make the pixel white? It could achieve the wanted 4,000 Nits, but the pixel's color/saturation would be completely lost. Or should the tone mapping maintain the full saturation/color, and lose all the Nits it can't handle? Then a significant amount of highlight punch & detail would get lost. So what should we do? In madVR you can choose. See option "fix too bright & saturated pixels by".

Quote: madVR's tone mapping works like this: If you actually tell madVR the proper peak Nits value that you measure your display as, all the pixels in the lower Nits range (ideally from 0-100 Nits) are displayed absolutely perfectly, in the same way a true 10,000 Nits display would show them. Tone mapping only starts somewhere above this lower Nits range. However, we can't simply jump abruptly from 0 compression to strong compression, so the tone mapping curve needs to start smoothly, otherwise the image would get a somewhat unnatural clipped look.
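For the curious, the shape madshi describes — untouched below a knee, then a smooth roll-off instead of an abrupt clip — can be sketched in a few lines. The numbers are arbitrary; this is not madVR's actual curve.

```python
import numpy as np

def soft_knee(nits, display_peak=800.0, knee=100.0):
    """Pass 0..knee nits through untouched, then roll off smoothly
    toward display_peak instead of clipping abruptly."""
    nits = np.asarray(nits, dtype=float)
    out = nits.copy()
    above = nits > knee
    span = display_peak - knee
    x = nits[above] - knee
    # Rational roll-off: slope 1 at the knee (no visible seam),
    # asymptotically approaching display_peak.
    out[above] = knee + span * x / (x + span)
    return out

levels = np.array([50.0, 100.0, 400.0, 4000.0, 10000.0])
for src, dst in zip(levels, soft_knee(levels)):
    print(f"{src:7.0f} nits in -> {dst:6.1f} nits out")
```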
#15
There's one around here somewhere.

What is the issue?
#16
I'm not entirely sure what the status of this project currently is. I don't have time to read 27 pages, sorry.

I've moved this thread into the new section. I would appreciate it if someone wants to take a leadership role with the creation of a fancy OP, with updated build links, a short summary, that sort of thing.
#17
As you may or may not be aware, "official" ML development has slowed significantly in recent times (background reading).

There have been contributors to the project who have continued to develop in their free time, and this section is dedicated to those contributors who develop and release builds.

Three core considerations when using this section:

  • These builds may work perfectly fine, or they may melt your camera in half.
  • They may appear to work, until you develop the footage and discover you just wasted an entire day. The onus of responsibility lies solely with you if you decide to use these builds; don't use them in any sort of professional capacity unless you are sure of your workflow.
  • The usual caveats apply. If your camera breaks in two because of the code given freely here, you get to keep both pieces of your broken camera.
#18
Other experimental builds / Latest Lua updates + fixes
January 28, 2021, 02:15:08 AM
Latest Lua updates (details).

Includes many other backend changes, e.g. focus, menu, Q button handling, fonts etc.

Therefore, it's important to give it a good try on all functions, not just Lua, so we can include it in the nightly builds.

Also includes lens.focus_pos and dynamic-my-menu.


Download / Source code / Technical discussion
#19
Other experimental builds / Non-CPU lens info
January 28, 2021, 01:50:51 AM
Set lens name, focal length and aperture for manual lenses. Lua script.

Download / Source code / Technical discussion

Quote from: Lars Steenhoff on October 29, 2016, 12:04:45 PM
If we can assign lens focal length and name for non cpu lenses, ( like using nikon lenses on a canon with an adapter) then I can use this data in post processing to identify what lens was used and which lens profile I should apply for distortion correction.
#20
Should work on top of latest nightly build.

Download / Source code / Technical discussion

Quote from: mk11174 on April 11, 2016, 03:10:19 PM
I was trying to film dogs catching a Frisbee the other day, and was thinking: I wonder if there's a way to add buffered recording to the Raw_Rec module? Like, have it record to the buffer but not to file for a certain number of frames the user chooses as a buffer; then, when the action you want happens, you press the shutter and it records the buffer to the video file?
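That idea is a classic ring buffer, and IIRC ML's raw recording modules did eventually gain a pre-record option along these lines. A minimal sketch (names and sizes are illustrative):

```python
from collections import deque

class PreRecorder:
    """Keep the last N frames in a ring buffer; on trigger, everything
    that led up to the moment is already captured."""
    def __init__(self, pre_frames):
        self.buffer = deque(maxlen=pre_frames)   # old frames fall off the end
        self.recording = False
        self.clip = []

    def push(self, frame):
        if self.recording:
            self.clip.append(frame)
        else:
            self.buffer.append(frame)

    def trigger(self):
        # Shutter pressed: flush the buffered history into the clip,
        # then keep recording normally.
        self.clip = list(self.buffer)
        self.recording = True

rec = PreRecorder(pre_frames=48)      # e.g. ~2 s at 24 fps
for i in range(100):
    rec.push(f"frame{i}")
rec.trigger()
rec.push("frame100")
print(len(rec.clip), rec.clip[0])     # 49 frames, starting at frame52
```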
#21
Other experimental builds / 10/12-bit RAW video
January 28, 2021, 01:42:13 AM
Experimental raw video recording at lower bit depths. Only models with CONFIG_EDMAC_RAW_SLURP/CONFIG_EDMAC_RAW_PATCH are compiled.

Download / Source code / Technical discussion
#22
Experimental builds based on CMOS/ADTG/Digic register investigation on ISO.

Quote from: a1ex on January 10, 2014, 12:11:01 PM
Just a small improvement in dynamic range in photo mode (around 0.3...0.8 stops). We were able to fine-tune the amplifier gains in order to squeeze a little more highlight detail.

Download / Source code

Feel free to discuss build-related issues in this thread; if you would like to contribute something more technical, the original thread is probably best suited.

Some other experimental work has been done by timbytheriver: Cleaner ISO presets.
#23
The OP has appeared offline since October, including at his YouTube channel; hope all is well.

Going to lock this for now. PM me if status changes.
#24
Quote from: Danne on January 25, 2021, 06:02:15 PM
No need for regulating anything.

Relegate vs. regulate; those who regulate could possibly be called regulators.

Someone who relegates is possibly something like this.
#25
Scripting Q&A / Re: Changing Subject description here
January 25, 2021, 05:47:04 PM
Quote from: Danne on January 25, 2021, 02:26:36 PM
Fully possible to change Subject:(check my title above) while posting another lua dilemma within the same post,

That will end up messy.

Quote from: Danne on January 25, 2021, 02:26:36 PM
If a lua issue were to be handled over time and "fixed" and worked upon, single posts would be great, but that´s not really what´s going on here. More like lua urges on a strictly personal level and only person with answers is a1ex atm.

Which is why the questions should stay on the forum, and not somewhere among a potentially endless chat log.

Quote from: Danne on January 25, 2021, 02:26:36 PM
Keeping all at one place would be easier for others looking for lua answers, and in this case, mostly still image related.

I'm not entirely sure how browsing through a multi-page thread is easier. Try to find something specific in the ADTG/CMOS thread, for example.

There's a difference between keeping a specific script relegated to a specific thread (questions/examples/etc.), and keeping all manner of random questions, loosely tied together because "LUA", in a single thread. Questions don't always get answered. How far through a thread should someone check before they realise a question was not answered?

Quote from: Danne on January 25, 2021, 02:26:36 PM
Also seems more community friendly if the author owns a deepened post and then he can pass answers to future lua "noobs" and contribute himself pointing to the coming mega lua post ;).

Until one of a potential million and one things happens that causes a member of the community to no longer be able to participate. I fully agree with a "mega post", but that is something I could do when specifics are easy to reach. I would not dig through a 50-page thread to determine where the questions are and where the resulting answers are.

Anyway, the entire LUA section is practically Garry's section. I'm not going to merge threads, nor archive the Q&A section, on the expectation that single contributors need to relegate their discussion to a single thread. As far as I'm concerned, Garry can keep on doing what Garry does best, however he sees fit.