
Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - skrk

#1
Camera-specific Development / Re: Canon 100D / SL1
January 03, 2019, 12:48:34 AM
Huge thanks to all involved! I can't wait to test it out.

Exciting stuff to see the 100D getting pushed further like this. :-)
#2
Many thanks! [edit: confirmed fixed!]
#3
Just wondering if anyone had a chance to try to replicate the below? Thanks.

Quote from: skrk on June 22, 2017, 09:26:41 PM
Hi - possibly another exposure simulation bug?

When I enable exposure simulation, adjusting the shutter speed with the dial works normally when slowing down, but it only registers every other shutter speed when speeding up. (The live view reflects the numbers shown: it changes only on every other click of the dial when speeding up.)

E.g. turning the dial slower it goes:
1/1000 -> 1/800 -> 1/640 -> 1/500 -> 1/400 -> 1/320 etc.

...but turning the dial faster it goes:
1/320 -> 1/320 -> 1/500 -> 1/500 -> 1/800 -> 1/800 -> 1/1250 -> 1/1250

If I go to one of these in-between shutter states, hit the "info" button, and go to the Canon screen, it shows the 'true' shutter speed (disagreeing with the Magic Lantern shutter speed shown), but the LV display still shows the brightness from the incorrect shutter value.

(If anyone has trouble duplicating I can be more explicit.)
#4
Thank you for the response -- if I wrote a Lua script to do time lapses, would the clock Lua uses be basically trustworthy?
#5
I did some searching re: decimal seconds with the intervalometer, and found some threads, but they are mostly ~4 years old, and I'm wondering what the latest information on this subject is.

My understanding is that the available clock is only a 1-second resolution, so the intervalometer can't be more precise than that; is that still the case?

My understanding is that there are script-based (or custom code-based) ways around this issue, but that they won't be time-accurate; is that still the case?

The main concerns for me are the typical ones: the difference between e.g. 1s and 2s intervals is a factor-of-two change in the apparent rate of motion, and fixing the resulting speed in post is obviously not a good option. It's also nice to be able to precisely control the length of the time lapse when you know how long you will be filming (e.g. this event will take 20 minutes and I want the time lapse to be 30 seconds long at 30fps, so I want 1.33s per image). It's also nice to get the max frame rate from the camera (my 100D can't do 1s intervals, but can do 2s, so it'd be nice to try 1.5).
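For what it's worth, that interval arithmetic is easy to wrap in a tiny helper (hypothetical Python, just to show the calculation I'm describing):

```python
# Hypothetical helper: find the intervalometer interval (seconds/shot) that
# turns an event of a given real duration into a clip of a given length.
def interval_for_timelapse(event_seconds, clip_seconds, fps):
    frames = clip_seconds * fps       # frames needed in the final clip
    return event_seconds / frames     # seconds between shots

# 20-minute event, 30-second clip at 30 fps -> 900 frames -> ~1.33 s/frame
interval = interval_for_timelapse(20 * 60, 30, 30)
```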

I have done some raw video recording, and am aware of the FPS override, but AFAIK I can't shoot anywhere close to 4K in that mode (on my 100D). That method also means I can't post-process the RAW files in Canon DPP (which means no lens optimizations for me). It also limits the time I can record on a given card due to file size (when compared to shooting JPG).

Any hope? Maybe this has been figured out and I'm just not aware of the fix/script/etc?

Thanks for any ideas!
#6
Hi - possibly another exposure simulation bug?

When I enable exposure simulation, adjusting the shutter speed with the dial works normally when slowing down, but it only registers every other shutter speed when speeding up. (The live view reflects the numbers shown: it changes only on every other click of the dial when speeding up.)

E.g. turning the dial slower it goes:
1/1000 -> 1/800 -> 1/640 -> 1/500 -> 1/400 -> 1/320 etc.

...but turning the dial faster it goes:
1/320 -> 1/320 -> 1/500 -> 1/500 -> 1/800 -> 1/800 -> 1/1250 -> 1/1250

If I go to one of these in-between shutter states, hit the "info" button, and go to the Canon screen, it shows the 'true' shutter speed (disagreeing with the Magic Lantern shutter speed shown), but the LV display still shows the brightness from the incorrect shutter value.

(If anyone has trouble duplicating I can be more explicit.)
#7
Thanks; I just got a crash/hang, with the shutter speed not set to BULB, but exposure simulation was enabled:

I think this is the log from that crash; anyway it's a different log; this time at :810 --

Quote
ASSERT: pReturnData->Engine_Address != 0xFFFFFFFF
at ./LvCommon/LvGainController.c:810, task Evf
lv:1 mode:3


Magic Lantern version : Nightly.2016Oct05.100D101
Mercurial changeset   : 60a2c84ce70d (100D_merge_fw101) tip
Built on 2016-10-05 04:56:52 UTC by ml@ml-VirtualBox.
Free Memory  : 532K + 1793K
#8
Thanks, all. @nikfreak -- I just tried again and it worked at first; then I set ISO to 6400 and it crashed. I wonder if it has to do with settings on the canon side...

This time a crash report showed on screen instead of just being blank; from the crash log:

Quote
ASSERT: 0
at ./LvCommon/LvGainController.c:806, task Evf
lv:1 mode:3

Magic Lantern version : Nightly.2016Oct05.100D101
Mercurial changeset   : 60a2c84ce70d (100D_merge_fw101) tip
Built on 2016-10-05 04:56:52 UTC by ml@ml-VirtualBox.
Free Memory  : 576K + 1974K

Some of my camera settings at the time of this test crash:
RAW, ISO 6400, ALO off, white balance tungsten, evaluative metering, high ISO NR on max, AF method FlexiZoneAF.
#9
Hi -- reporting a bug that seems to cause a crash/hang/breaking of ML when exposure override and the bulb timer are both enabled:

- install a fresh copy of the current "Nightly.2016Oct05.100D101.zip"
- open ML menu, enable Expo. Override, enable bulb timer, exit ML menu
- enable LV, adjust shutter speed to BULB
- disable LV, enable LV

Camera hangs with black screen; can't be turned off or on, requires battery removal.

It seems to be the combination of exposure override, bulb timer, BULB shutter speed, and using live view that causes it. If you have trouble reproducing, just adjust the settings as described above and go in and out of LV a few times... it happens pretty reliably.
#10
Quote from: LittoD on April 21, 2017, 05:05:44 AM
Hi there,
first time coming to the forum. I have used ML for a while now as a supplement to regular Canon recording. I can't ever seem to get the RAW to work for me. I'm committed to figuring it out though. Lately I have been getting this overlay of black crosses (focus marks?) that show up every time I convert from RAW to DNG. I use Resolve to open them up and RAWlite to convert. I had RAW working before once without these issues.

Are you converting to DNG with MLVProducer? If so just make sure to use the "remove focus pixels" setting when converting. Sounds like that's the issue?
#11
Huh, well: there does seem to be noise down in the low levels, below the image data, as I suspected. If I export to a linear-gamma TIFF I can see it down there. If I manually crush the lowest levels to zero, that noise goes away.

I can see that setting the black point in MLVProducer (which defaults to 2044 for my camera) does not simply clip the levels below 2044 to black; it does seem to "squash" them into the low end. (To test this I exported a series of linear tiffs at black point gradually increasing from 0 to 2044 and watched the histograms as the noise hump is gradually squashed closer to and eventually in to the image data.)
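To make the clip-vs-squash distinction concrete, here's a toy pure-Python sketch (my own illustration, not MLVProducer's actual code):

```python
# Toy 14-bit raw samples: sub-black "noise hump" values (below 2044)
# followed by image data, for a camera with black=2044, white=16383.
raw = [1900.0, 2000.0, 2100.0, 4000.0, 16383.0]
black, white = 2044.0, 16383.0

# Hard clip at the black level: sub-black noise becomes exactly 0
hard_clip = [max((v - black) / (white - black), 0.0) for v in raw]

# Black point left at 0: everything, noise included, is scaled into [0, 1],
# so the noise hump survives at the bottom of the histogram
no_black = [v / white for v in raw]
```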

I assume there are good reasons for that, as most of raw development is still above my head. When I take a linear tiff exported with blackpoint set to 0, I can manually crush that low-level noise to zero, but when I then try to manually scale the blackpoint and apply an sRGB gamma curve, the results are a disaster. That low level noise may be gone, and the overall brightness of the image is good, but tons of new noise is present. Certainly not an improvement. :-)

I have no idea why that happens, but clearly there is a lot of extra magic going on that escapes me. So for now I'm gonna have to go back to school on this stuff and not worry about that noise.

I do think that I've learned that my complicated notions of how the dynamic range of raw images is fit into the dynamic range of sRGB were misguided. Apparently sRGB 0 maps to XYZ Y=0 and sRGB 1 to XYZ Y=1 -- meaning you just linearly scale the DR of RAW into sRGB (plus whatever adjustments you make while developing: black point, etc.) and the output device imposes whatever DR limitations it has. I assumed something much fancier was going on. Lots to learn. I'm now wondering whether the link I posted before, which claims that any TIFF or JPG has less dynamic range than a DNG/RAW/CR2/etc., is actually correct: it doesn't sound like a DR is defined in the sRGB standard, surprising as that is to me; I would have expected the black and white points to be referenced to some output illumination strength.

Thanks to those who helped.
#12
D'oh -- just realized I had the sRGB gamma curve backwards in my last gif, above! That was the reverse transformation, per wikipedia.

The proper gamma curve (sRGB = 1.055*x^(1/2.4) - 0.055) makes it feel like my concern is unfounded: wherever the RAW noise lands (if it lands at all), and however the gamma curve is applied to the linear values, the lowest-end noise will only be separated further from the image data (since slope > 1), afaict.
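One caveat I've since noticed: the spec's curve also has a linear toe below 0.0031308, so the power formula quoted above is only the upper segment. A minimal Python version of both directions (per IEC 61966-2-1, as I understand it):

```python
def srgb_encode(x):
    """Linear [0, 1] -> sRGB-encoded [0, 1]."""
    if x <= 0.0031308:
        return 12.92 * x                    # linear toe near black
    return 1.055 * x ** (1 / 2.4) - 0.055   # power segment quoted above

def srgb_decode(y):
    """Inverse: sRGB-encoded [0, 1] -> linear."""
    if y <= 0.04045:
        return y / 12.92
    return ((y + 0.055) / 1.055) ** 2.4
```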

So I'm learning a bit more about matrices and it sounds like the DR of the RAW form is maybe only squashed or not depending on how sliders are set in MLVProducer, and that if you develop a RAW without setting a white/black point you are essentially containing the whole DR of a RAW image in the DR of sRGB, and the DR will then be limited by the output device's DR?

Anyway, I'm going to play with some sliders in MLVProducer and make some histograms in python and see if this all lines up.
#13
Thanks again, everyone --

I do want to post my code at some point, but right now it's not doing anything related to what I'm asking about in this thread, so I don't think it will help anything :-) -- right now it just processes 16bit sRGB tiffs, and I'm just wondering if I have an opportunity to make it better, and I think that depends on how RAWs are developed:

Quote
Your graph implies some kind of compression, which to me appears to be a kind of lift (soft knee, 5% lift?). Of course this would increase the noise, as the noise floor gets lifted, and it doesn't matter what you convert to afterwards: the noise is there.

That graph doesn't indicate what I'm doing, it was intended to depict what my imagination is wondering about MLVProducer (and others): everything I've read tells me that translating to sRGB (or any other non-RAW color space) is going to compress the dynamic range. I'm just not clear on the specifics.

Maybe this will illustrate my question better. Which of the following three methods (if any!) roughly approximates what happens to overall brightness values as a RAW is converted to sRGB -- A, B, or C?:

UPDATE: D'OH - NOPE. This graph is using the wrong gamma curve (the reverse transformation) -- IGNORE.



So my question is just about how MLVProducer (and presumably all the other RAW->sRGB converters in the world) convert the large DR of RAW into sRGB.

If either method A or B are correct, then I will wonder if the noise floor in the low end of the RAW range is being compressed into the low end of my image data, and I'll be interested to investigate further (e.g. to see if there is even noise down there that matters in the first place, or if it's already overlapping my image data anyway). If method C is correct, then I don't need to worry about it.

This link (wikipedia) specifies the 3x3 matrix and the gamma curve, and it seems to imply that method "A" above is roughly correct, but I don't know linear algebra and so I can't guess what that matrix is doing to the brightness.
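From what I can tell, the part of that matrix that affects brightness is just its middle row: relative luminance Y is a plain weighted sum of the linear R, G, B values. A tiny sketch with the standard sRGB/Rec.709 weights:

```python
# The middle row of the linear-RGB -> XYZ matrix gives relative luminance Y.
# Weights are the standard sRGB/Rec.709 ones.
def luminance(r, g, b):
    """Relative luminance from *linear* sRGB components in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure white maps to Y = 1, pure black to Y = 0 -- brightness is just a
# linear weighted average; no extra compression is hidden in the matrix.
```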

And if I'm just off in crazy land on all this, then tell me to go back to school and I'll stop wasting your collective time. :-)
#14
Raw Video Postprocessing / Re: MLVProducer
December 27, 2016, 10:27:26 AM
I can't seem to open DNGs that I export from MLVProducer in anything (dcraw, ufraw, darktable, rawtherapee, etc). I wonder if it's because the ufraw that the Ubuntu repositories provide is apparently not compiled with zlib support?

Quote
$ ufraw-batch --version
ufraw-batch 0.22
EXIV2 enabled.
JPEG enabled.
JPEG2000 (libjasper) disabled.
TIFF enabled.
PNG enabled.
FITS disabled.
ZIP enabled.
BZIP2 enabled.
LENSFUN enabled.

...maybe it would be nice to have an uncompressed DNG export option?

There are various zlib packages in the Ubuntu repos, and I have zlib1g installed, but I'm not sure which if any of the others might help the problem...

Thanks!
#15
Thanks, both! --

In case it wasn't clear from above, I love RAW and use it all the time; I'm not talking about picture development here, I'm talking about custom code that I've written that I want to make as effective as possible. Python seems to have the rawpy and rawkit modules, both of which look good, and which seem like they would enable me to develop the raw video frames into linear-gamma sRGB arrays that I could then process... but I'm a bit lost on then changing the RAW/linear gamma into sRGB when I save the sRGB tiff at the end. I can figure it out, I just don't want to spend a week doing research if there's no point to begin with.

My code is calculating based on values in the current frame and previous frames... and the nature of the calculation is that low-level noise builds up over time. Currently I have to crush the low end of each frame of the histogram with an adjustment curve before processing. It works fine, but I sacrifice some details in the shadows' falloff, etc. I got to wondering if I could keep my image out of the noise floor by using a smarter transfer function.
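(For the record, the statistics behind that buildup, as I understand them: uncorrelated random noise summed over N frames only grows like sqrt(N), while signal -- and fixed-pattern noise -- grows like N. A quick simulated sketch:)

```python
import random

random.seed(0)   # fixed seed so the sketch is reproducible
N = 1000

# Zero-mean random noise summed over N frames stays on the order of sqrt(N),
# while a constant per-frame value (signal, or FPN) sums to N times itself.
noise_sum = sum(random.gauss(0, 1) for _ in range(N))
signal_sum = sum(1.0 for _ in range(N))
# |noise_sum| stays far below signal_sum = 1000.0
```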

This is why I'm concerned about throwing away DR -- I don't really know how the noise distribution looks in the RAW histogram (that's something I will explore next) but if the conversion to sRGB is combing the noise floor with the low-end of the image, that would be a place I could improve my process.
#16
Thanks for the replies! -- @kontrakatze, I'm aware that the sRGB primaries are pretty close to Rec.709 and that the gamut is more or less the same. I'm not worried about color information, I'm worried about the gamma (or maybe "transfer function" is the correct term?). (And I don't use a DNG for various reasons: I don't own Adobe software, and I'm not (yet) sure how to process it in Python, but maybe I should look at that!)

I know that sRGB defines a gamma curve; I'm just not clear on how that relates to the full dynamic range of a RAW frame. My impression is that Rec.709, and possibly also sRGB, squashes a lot of the dynamic range into a tighter range -- that's the whole reason for log formats, as I understand it. Further, my understanding is that sRGB especially squashes the under/overrun tighter than the middle, which makes sense.

Here's an illustration of what I'm visualizing might be happening:



This link seems to confirm @whyfy's and my suspicion about DR compression: https://blogarithms.com/2012/01/09/wasting-dynamic-range/comment-page-1/

...it shows that using TIFF or JPG or any other non-DNG (/non-HDR) format, in any color space (ProPhoto, sRGB, whatever) will reduce dynamic range when coming from Lightroom, so I guess it's a pretty safe bet that MLVProducer will do the same.

My eventual end format is sRGB and/or Rec.709, but the reason I'm caring about this is that my processing needs to keep the useful image data out of the noise floor as much as possible. If the DR compression is combining the noise floor with the low-end of the image data, I'd like to avoid that. I don't know if the above diagram of the noise vs. the image data is accurate -- maybe most/all of the noise is already overlapping the image data and the DR compression won't affect the SNR -- but I want to avoid it if it's possible.

It's a hard thing to test because I don't (yet) have the ability to reverse any of the log curves in software. I could try to figure that out, but I don't want to put in the effort if there's no potential gain from it.

I ran a test where I exported the same frame to 16bit TIFF-sRGB and 16bit TIFF-slog and compared... after pulling the images up in GIMP and crudely tweaking the curves/levels of the slog version, it didn't seem like there was any more detail present in the low end, or any less noise... so I'm still left wondering if this is worth pursuing. But if the above diagram is accurate, then it would be...

Maybe I'll look into setting up a DNG workflow... RAW formats intimidate me (in terms of writing code to process them). :-)
#17
"Just let us know if you can replicate again and how to do so."

Will do.

"Btw: your link contains your ROM0/1 dumps. Please remove those."

Done, sorry, thought they might be useful.
#18
I had a number of crashes. I'm sorry that I don't have instructions for reproduction: it seemed to be sporadic. It would result in a lockup requiring battery (and card?) removal. At one point a card seemed borked as well: I re-formatted it and it was OK.

It was a while ago, but IIRC it happened when fps override was enabled and I hit the button to zoom in to focus (or perhaps when I was then zooming back out.) It seemed possibly connected to having a bulb timer override set as well (>5m). ML raw video module was probably enabled. It's also possible that it had to do with the bad SD card; I reformatted and the problem seemed to stop, but that could have been coincidence or related to ML settings or whatever. (E.g. it may have happened with another card -- it's all so fuzzy now -- I'll take better notes next time for sure.)

Sorry this is so completely vague :-). I don't know if these crash logs are useful or not, but they are linked here in case they are at all useful. And of course if I ever manage to find a reliable reproduction I'll let you know.

The first few crash logs list:

ASSERT: pReturnData->Engine_Address != 0xFFFFFFFF
at ./LvCommon/LvGainController.c:810, task Evf
lv:1 mode:3

The last few list:

ASSERT: 0
at ./LvCommon/LvGainController.c:806, task Evf
lv:1 mode:3
#19
Hi - I am doing some simple processing of frames from raw video taken with ML.

Currently I use MLVProducer to remove the focus pixels and convert all the frames to 16bit TIFF, sRGB.

I process these with custom code: this code converts the 16bit integer TIFFs into floating point, does math, and then writes them out as 16bit integer TIFFs again. The noise floor is a big issue (long story: low-light exposures, so most frames are very dark). I do lots of work to remove the noise, deal with it, work around it, etc. (FPN noise removal, adjustment curve, etc.)

Everything works fine, but I want to make sure I'm doing as much as I can in terms of the noise floor.

It occurred to me that I might be throwing away dynamic range by converting the RAW video to sRGB -- is this correct? Does this happen when MLVProducer converts RAW video to sRGB, or is that compression (or non-compression) just controlled by the sliders? In other words: is a (small) dynamic range built in to the definition of the sRGB color space? (My impression is that Rec.709 does have DR built in to it?)

If DR is being compressed, would it be smarter for me to instead convert the frames to some kind of log format? Then when I convert to floating point I could reverse the gamma curve and process as normal without having thrown away dynamic range?

My main concern is that when I convert to sRGB, I'm squashing the dynamic range and thereby combining the noise floor with the lowest end of my image data. Am I crazy?

Any help is appreciated, thanks!
#20
Raw Video Postprocessing / Re: MLVProducer
November 12, 2016, 10:32:32 AM
Yeah, brilliant! Works perfectly now, thanks!
#21
Raw Video Postprocessing / Re: MLVProducer
November 12, 2016, 03:41:07 AM
Sure!

Here is the MLV (213MB): http://lacinato.com/pub/M11-1826.MLV  update: smaller .zip version (126MB): http://lacinato.com/pub/mlv.zip
Here is the .fpn: http://lacinato.com/pub/demonoise.fpn

I made the fpn from the last few seconds of the video (during which I covered the lens with a card).

In MLVP, if I set the output space to sRGB and gamma correction to 0.52, I note that:

- enabling only focus dot removal removes focus dots
- enabling only FPN removes focus dots and removes some noise
- enabling focus dot removal and FPN removal removes focus dots, removes some noise, and adds some hot pixels
- enabling focus dot removal and FPN removal and enabling "remove hot pixels" removes focus dots, removes some FPN noise, removes the hot pixels, and adds some noise

This was taken with a 100D (Rebel SL1) and the kit 18-35 lens in a dark-ish room at f3.5 and ISO 100.

Thanks! Let me know if I can provide anything else.
#22
Raw Video Postprocessing / Re: MLVProducer
November 11, 2016, 09:40:40 AM
Quote
Interesting... could you elaborate more on this? What exactly does your code do during post-processing? If you don't mind me asking, could you share it with us -- that would be awesome. Thanks for sharing @skrk!

Thanks for asking -- I'd normally be eager to share, but in the short term I'll have to keep quiet about it. It's nothing that would impress serious coders/visual FX people, but I'm working on a project with it and would like to do that first. :-) The dark frame part of it is dead simple (just a straight average) and probably no different from MLVP's, except that it works after the focus pixel removal, on the frames cleaned up by MLVP... something that is important in my workflow but probably unimportant for most use cases. Short version: probably nothing useful to anyone, but I'll share when I can. :-)
#23
Raw Video Postprocessing / Re: MLVProducer
November 10, 2016, 09:10:51 PM
Quote
I suppose I could use MLVP to remove the focus pixels, and then do my own dark frame subtraction in processing

...update on the above: I have done this and it works well: I do focus-pixel removal in MLVP, but no other processing. Then I have custom code that computes an average dark frame and subtracts it during my post-processing, and it works very well. So perhaps that indicates that something could be changed in MLVP in terms of how the FPN is applied?
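The dark-frame step can be sketched in a few lines (my own toy illustration using flat pixel lists; the real code works on 2-D frames):

```python
# "Straight average" dark frame from a few lens-capped frames, then
# per-pixel subtraction from a real frame.
def average_dark_frame(dark_frames):
    n = len(dark_frames)
    return [sum(px) / n for px in zip(*dark_frames)]

def subtract_dark(frame, dark):
    # Clamp at 0 so noise fluctuating below the dark level can't go negative
    return [max(p - d, 0.0) for p, d in zip(frame, dark)]

darks = [[10.0, 12.0], [14.0, 8.0]]        # two lens-capped frames
dark = average_dark_frame(darks)           # -> [12.0, 10.0]
clean = subtract_dark([100.0, 9.0], dark)  # -> [88.0, 0.0]
```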
#24
Raw Video Postprocessing / Re: MLVProducer
November 09, 2016, 01:42:15 AM
Hello --

So, I'm learning a bit more about FPN. Here's my current issue -- FPN works great. Focus-pixel removal works great. But they seem to combine strangely?

I found this because I'm doing some custom video effect stuff that requires low-noise images because it does a lot of addition of frames, so if there is noise present it adds up and becomes visible. I noticed that a series of still raw images from the camera work fine, but I've been trying to use MLV video as an alternative route, and it doesn't work (yet!).

If I build a FPN profile and enable it, the noise drops significantly. But if Focus-pixel removal is also enabled, it also causes a bunch of red hot pixels to be created. I then enable the hot pixel removal, and that removes the hot pixels but it seems to add a bit of gray noise as well, as shown at the end of the gif below. (When I say "hot" here, I don't mean truly hot, just hot when I sum a lot of frames.)

Is this normal/expected, or is it some kind of error in the order of operations being done?

For example, it seems like the FPN also includes the focus pixels, so if I'm using both perhaps there is a math error when the FPN subtracts focus pixels that were already removed, which in turn causes the hot red pixels?

Here is an animated gif that shows the effect. I'm making the issue visible by turning up the gamma correction. Perhaps increasing the gamma is ruining the value of my experiment, but I do see the same scenario when I run them through my process (hot red pixels when I use FPN and use remove-focus-pixels but don't remove hot pixels), so I'm reasonably suspicious that it's related?

I suppose I could use MLVP to remove the focus pixels, and then do my own dark frame subtraction in processing (BTW -- it'd be great if there was a way to export the computed FPN frame -- if it didn't include the focus pixel removal also, that is.)

Thanks for looking at it! And let me know if I'm misunderstanding anything, or if a demo MLV would be useful.

#25
Quote
How did you determine that the dngs are half resolution?

D'oh, sorry -- it's a viewing issue with ufraw and related libraries. If I open a DNG in ufraw and convert to png, it's fine, it's just when opening the .dng with image viewing programs (which use ufraw to view it) that it shows as half the expected resolution. My mistake.