Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - garry23

Pages: 1 ... 85 86 [87] 88 89
Feature Requests / Re: Focus presets ?
« on: January 12, 2014, 05:34:35 PM »

This has been requested before (by me) but the idea got little traction. I found another solution using a TCL script running from Smartshooter on my Venue-8 Pro Windows tablet.

My scenario is that I wish, for a given FL (I usually use 24mm), to carry out a focus stack, ie for landscapes.

For my 5DIII I have worked out I need to take three images at 3, 5 and 15ft, at F/10.

I initially pre-calibrated the lens with a few marks (you may wish to try this).

The Smartshooter solution hints at how ML could do this, ie in a script.

In other words, assuming you have an auto-focus lens, you get Smartshooter (or an ML script) to drive the lens first to its end stop (say the macro or the infinity end), then step to the correct focus distance, ie in my case 3, 5 and 15 ft.

I know how many steps as I pre-calibrated the lens once.

As I say, once scripts are running on my 5DIII I will give this a try, until then Smartshooter is the next best thing to ML....IMHO
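For what it's worth, the pre-calibration idea could be sketched like this (illustrative Python only; the step counts and the whole interface are made up, not real Smartshooter or ML calls):

```python
# Hypothetical sketch of the pre-calibrated focus-step idea described above.
# The step counts are invented for illustration; real lens control would go
# through Smartshooter's TCL scripting or a future ML script.

# Calibration: steps needed from the infinity end stop to each focus
# distance (measured once per lens/focal length, here 24mm).
CALIBRATION_24MM = {3: 180, 5: 150, 15: 90}  # distance_ft -> steps (made-up)

def stack_plan(distances_ft, calibration):
    """Return (distance, relative_steps) moves, starting from the
    infinity end stop and working through the stack far-to-near."""
    plan = []
    current = 0  # steps already travelled from the end stop
    # Visit distances in increasing step count, ie far to near from infinity.
    for d in sorted(distances_ft, key=lambda d: calibration[d]):
        steps = calibration[d]
        plan.append((d, steps - current))  # relative move from current position
        current = steps
    return plan

# First drive the lens to the infinity end stop, then:
for distance, rel_steps in stack_plan([3, 5, 15], CALIBRATION_24MM):
    print(f"step {rel_steps} towards macro, shoot at {distance} ft")
```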



Although this may be a little off topic, this article gives some useful insight into getting minimum-noise data in the camera:



Hardware and Accessories / Re: DSLR Controller
« on: December 19, 2013, 05:00:57 PM »
I have a Promote remote and CamRanger. Plus, I just got a Dell Venue 8 Pro. This allows me to tether to my 5DIII in the field and control the camera from the Canon EOS Utility.

Of course my dream is to control ML as well.

Feature Requests / Re: Focus Stacking
« on: December 14, 2013, 02:18:02 PM »
I will add to my own post as my request may have confused some. The request is to allow ML to drive/step an auto-focus lens from a fixed and known position, such as the infinity stop, to specified focus locations.

These locations would be pre-calibrated. At the moment I do this manually via marks on my lens. That is, I have pre-marked my lens at the focus stack locations I need for landscape. This is not a macro stack request.

Feature Requests / Focus Stacking
« on: December 11, 2013, 01:50:06 AM »
For those that desire full tack-sharp depth of field, the only way is to carry out focus stacking. For example, even shooting at the HFD will not give full tack sharpness, just acceptable out-of-focusness. As an example, on my 24-105mm at 24mm, if I choose a blur spot, ie CoC + diffraction, of 20 microns, the HFD is about 14ft at around F10. However, the defined sharpness zone is 'only' down to about 6ft.

I know some are saying that is fantastic, but if you want to cover down to a couple of feet, say, you would need to focus stack. For example, at 24mm on my 5DIII, I would need to take three shots at 3, 5 and 15ft.

All my calculations are made using TrueDOF-Pro, Optimum-CSP and FocusStacker.
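Those apps' exact models aren't public, so the figures above can only be ballpark-checked. A simple sketch that combines defocus and diffraction in quadrature (my assumption, along with a 0.55 micron wavelength) lands in the same region:

```python
import math

def hyperfocal_mm(f_mm, N, total_blur_um, wavelength_um=0.55):
    """Hyperfocal distance using a total blur-spot budget, ie
    defocus and diffraction combined in quadrature (a common
    approximation, not necessarily the apps' exact model)."""
    diffraction_um = 2.44 * wavelength_um * N        # Airy-disk diameter
    defocus_um = math.sqrt(total_blur_um**2 - diffraction_um**2)
    c_mm = defocus_um / 1000.0                       # defocus CoC in mm
    return f_mm**2 / (N * c_mm) + f_mm               # classic HFD formula

H = hyperfocal_mm(24, 10, 20)
print(f"hyperfocal ~ {H / 304.8:.1f} ft")  # around 13 ft, near the ~14 ft quoted
```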

OK, the 'request' is this. Can the developers give us a way to specify a set of focus distances, ie three in the example above? Ideally the user would be able to specify a few FLs and the related focus distances, say up to 9, so these can be easily selected from a menu.

I hope the above is clear and that it hasn't been requested before.

I appreciate this would only work for some lenses.

If enabled it would be like focus bracketing, but at specified distances.

Feature Requests / Re: Enhanced ETTR request
« on: October 30, 2013, 01:46:06 PM »

I fully appreciate all that has been said, eg about night shooting. I am not suggesting the AETTR tweak as a replacement for other bracketing protocols.

I see the proposed tweak as a way to give the user an option to bracket within the AETTR.

Thinking about a bracketing strategy, I wonder if it would be better to try something like this: if the base exposure is more than, say, X EV away from the AETTR calculated exposure, then insert as many extra brackets at, say, Y EV as required, where Y is user defined.

Thus if the user enabled the additional bracket feature, ie used a non-zero value of Y, say 1, 2 or 3, the AETTR calculation would work out how many additional brackets to insert.

I believe this AETTR addition would extend the power of AETTR and cover the situation, for example, in a church, where you would meter the windows and adjust for the correct zone, then AETTR would take an image for the windows, calculate the AETTR and insert 'extra' brackets at Y Ev as required, ie to fill in. Using this strategy the AETTR only inserts additional brackets if the extreme brackets are far apart, say, greater than Y + X Ev.

In this mode the user is expecting brackets, if needed. This feature would be switched off if Y was zero.
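In sketch form (illustrative Python, not ML code; the X/Y names follow the description above, everything else is my assumption):

```python
import math

def insert_brackets(base_ev, aettr_ev, X=2.0, Y=1.0):
    """If the AETTR exposure is more than X EV from the base exposure,
    insert intermediate brackets spaced at (at most) Y EV, evenly.
    Y == 0 switches the feature off, as described above."""
    delta = aettr_ev - base_ev
    if Y == 0 or abs(delta) <= X:
        return [base_ev, aettr_ev]          # no fill-in needed
    n = math.ceil(abs(delta) / Y) - 1       # intermediates required
    step = delta / (n + 1)                  # actual even spacing <= Y
    return [base_ev + i * step for i in range(n + 2)]

print(insert_brackets(0.0, 6.0, X=2.0, Y=2.0))  # [0.0, 2.0, 4.0, 6.0]
```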



Feature Requests / Re: Enhanced ETTR request
« on: October 30, 2013, 04:05:47 AM »
I'm sorry if I'm confusing you. As we know, our camera sensors will max out at the extremes of a large DR.

The power of the AETTR module is that by starting with a protected highlight shot, the AETTR algorithm will give us a reasonable image at the 'other end'.

Many times the delta exposure between the highlight shot and the AETTR shot is too great, eg more than say 2EV.

By allowing the AETTR module to insert some intermediate shots, we 'fill in' between the extremes.

How the user then uses these brackets is a post processing choice.



Feature Requests / Re: Enhanced ETTR request
« on: October 30, 2013, 03:11:39 AM »
We mustn't get confused by using HDR terms.

What I am proposing is using the AETTR module to create a bracket set to cover the DR of the scene.

If you then wish to use this bracket set by throwing it at an HDR programme, that is up to you. You can also use the brackets with an Enfuse engine or manually blend in Photoshop.

Feature Requests / Enhanced ETTR request
« on: October 30, 2013, 03:03:23 AM »
Currently we have three great ML ways to guarantee we take a set of brackets that cover a scene's  required dynamic range capture, ie the important highlights and the important shadows.

First, we can meter and calculate the required bracket set, then use ML bracketing.

Second, we can use auto bracketing in ML, but 'lose' control of the number of brackets.

Thirdly, we can use AETTR: meter for the highlights and adjust from the 18%, zone V, to, say, zone VII; use this as the base exposure and invoke AETTR to capture the calculated ETTR one. You thus end up with two brackets, a user-selected highlight and an ETTR. But these two brackets may be too far apart for post processing.

What I am proposing as an ETTR enhancement is to give the AETTR menu one more variable, namely the number of images to take, evenly spaced, between the base exposure and the ETTR one. At the moment this is zero, in other words no intermediate brackets.

The new user variable, between 0 and 9, say, will insert that number of brackets, evenly spaced between the base and the calculated AETTR.

This enhancement of AETTR will bridge the benefits of advanced bracketing and AETTR.
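As a sketch of the arithmetic (illustrative Python, not ML code; the function name is mine):

```python
def aettr_bracket_set(base_ev, ettr_ev, n_intermediate=0):
    """Base exposure, n evenly spaced intermediate brackets, then the
    calculated AETTR exposure; n_intermediate = 0 reproduces today's
    two-image behaviour."""
    step = (ettr_ev - base_ev) / (n_intermediate + 1)
    return [base_ev + i * step for i in range(n_intermediate + 2)]

print(aettr_bracket_set(-3.0, 3.0, 2))  # [-3.0, -1.0, 1.0, 3.0]
```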

Finally, I'm not a programmer but would like to know how to take a module and tweak it, ie turn it into source, tweak it and recompile it.



HDR and Dual ISO Postprocessing / ML-based High DR Workflow
« on: October 27, 2013, 05:58:00 PM »
Let me say up front that what follows is my personal workflow. I’m sharing it with fellow MLers in the spirit of learning and improving. My workflow is based on the latest nightly build for my 50D (I am waiting for the release for the 5DIII that works with the latest Canon firmware as I need the F8 focusing). I assume the reader is familiar with ML, Advanced Bracketing, Auto-ETTR and Dual-ISO.

So here is the workflow:
a.   Enable the appropriate modules, eg Auto-ETTR and Dual-ISO;

b.   Compose and focus the scene and assess the DR of the scene, either using ‘guess work’, in-camera metering (ML or Canon) or an external meter (I use a Sekonic L-750DR);

c.   Based on the metering decide on one of the following basic capture strategies:
i.   If the DR allows it, ie low and containable in a single image capture, use a single exposure and set metering handraulically using your photographic skills (in whatever mode you decide to use, ie Tv, Av or M). This is the non-ML-enhanced base approach;
ii.   As above, but get some help by using Auto-ETTR (double-half press or SET, ie not ‘Always-On’ or Auto-Snap) to obtain a single image capture and maximize the quality/quantity of the image file, ie maximize the number of useful photons captured and converted, without blowing out highlights. A further refinement here is to switch on Dual-ISO as well, but I prefer not to use this as part of my photographic workflow;
iii.   Use Auto-Snap or Always-On AETTR and first meter for the highlights you wish to ‘protect’ (recompose as required) and use this as the starting image for the AETTR capture. Using this approach you will get at least two images, one with good highlight capture and the other with likely blown-out highlights but good shadow/mid exposure (according to your AETTR setting), ie based on the AETTR algorithmics. This is a good strategy for capturing a two-image bracket set, ie as long as the scene’s DR is not too large for your camera. This two-image bracketing is fast and virtually guarantees you will never have blown-out highlights that are important to you;
iv.   Switch off AETTR (and Dual-ISO), switch on advanced bracketing and select the number of brackets to cover your metering, or use the auto setting, which, although it will mean more image captures, will result in a full-DR bracket set.

d.   Ingest into Lightroom (I have the Adobe Photographers set-up, ie Photoshop-CC + LR);

e.   For the single image captures I will then carry out basic LR processing as normal;

f.   For the two-bracket (auto-snap) capture I will adjust the images, eg to ensure good highlights in one and good shadows/mid-tones in the other. I will throw these two images down two post-processing paths. First I will use LR/Enfuse, and then I will use ‘Merge to 32-bit HDR’. I then have two image files to ‘play around’ with, a 16-bit one and 32-bit one;

g.   For the advanced bracketing set I will once again try several post-processing routes, eg Photomatix, HDR Efex Pro 2 or ‘Merge to 32-bit HDR’.

h.   In all cases I will usually go into Ps-CC and finish off the image with a variety of post-processing tools.

So, in conclusion, I’m not saying the above is THE way to go, but, for me, it works and I thank the ML team for that!
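The strategy choice in step c could, very roughly, be reduced to a decision rule like this (a sketch only; the DR thresholds are illustrative and not part of the workflow above):

```python
def choose_strategy(scene_dr_ev, camera_dr_ev=10.0, margin_ev=1.0):
    """Rough decision logic for the capture strategies in step c.
    All thresholds are made-up illustrations, not measured values."""
    if scene_dr_ev <= camera_dr_ev - margin_ev:
        # DR is low and containable in a single capture (c.i / c.ii)
        return "single exposure (optionally AETTR half-press)"
    if scene_dr_ev <= camera_dr_ev + margin_ev:
        # DR roughly matches the sensor: two-image bracket (c.iii)
        return "Auto-Snap AETTR two-image bracket"
    # DR clearly exceeds the sensor: full bracket set (c.iv)
    return "advanced (auto) bracketing"
```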



I have CamRanger running on my 5DIII.

If only it could interface to ML.

There is a lot of physics going on here. I commend these two articles to those who wish to have a greater understanding of their sensors:

And for the 5DIII owners


Modules Development / Re: Auto ETTR based on RAW histogram?
« on: April 26, 2013, 03:29:55 PM »

I suggested the above as I thought you had worked out a way to display a RAW histogram in LV, rather than 'just' in review mode.

In other words, if a RAW histogram can be displayed in LV, a user option to display, say, the right-hand highlight stop as a zoomed-in histogram would be very useful.

Modules Development / Re: Auto ETTR based on RAW histogram?
« on: April 26, 2013, 12:32:36 AM »
May I simply endorse the above and ask if it would be possible to factor in a user variable that allows the user to select the entire histogram or a fraction of it, from the 100% highlight end. It could be in stops or fractions of the full histogram.

Modules Development / Re: Auto ETTR based on RAW histogram?
« on: April 21, 2013, 04:27:40 PM »
I have written this to hopefully help the broader ML community better understand some of the ‘feature requests’, in this case ETTR.

Obviously it is written from an ‘IMO stance’ as ETTR and other strategies to maximize dynamic range, or captured image data fidelity, are not universally agreed upon.

First, to maximize the captured data’s fidelity for post processing I believe we need to try and accomplish several things with our exposures (other than ensure they are in focus etc): minimize noise, maximize S/N and capture the most tonal information on each sensor element (RGBG). However, trying to accomplish all these at once, for a real world scene, is near impossible.

For example, to minimize noise we should only shoot with a camera cooled to its lowest operating temperature, eg to minimize dark current noise. The longer we shoot and if we shoot on hot days this noise contribution will increase, just like entropy.

To maximize S/N we should seek to capture the maximum number of photons, and no more, ie achieve a Full Well situation. However, although we may be able to do this for a real scene, it will only be achieved in the few sensor elements in the brightest part of the scene, ie a very small % of the overall statistics of the captured image. For instance, the subject/focus of the scene may be in the mid tones or lower, ie not a specular highlight that is creating the Full Well situation.

I believe we all now know that DSLR cameras do not capture and process light like our eyes or film. The process is linear and this is why there is apparent merit in ETTR and bracketing strategies. That is, trying to get the maximum tonal gradation into the capture, without ‘blowing out’ important data.

So far so good.

I think bracketing is not ‘contentious’ as we usually are on a tripod and at the base/lowest ISO, ie where we can guarantee that some of the sensor elements capturing the scene information we deem important will be at their Full Well level, albeit only a few %, unless we ‘over bracket’.

I think the issue comes when we introduce the ISO factor, ie when doing a handheld bracketing set or seeking a handheld ETTR single exposure. In both cases we may need to increase the ISO to achieve a good shutter speed, eg the slowest bracket greater than 1/FL, say OR greater than 1/50, say.

I for one take a lot of handheld 3-brackets on my 5DMkIII and have confidence that increasing the ISO will not create too many issues in post processing. However, from my reading on sensors etc, I will not increase the ISO above about 1600-3200, as this will transition me from the region where the camera noise sources dominate to where the sensor limitations dominate, ie I’m just not capturing enough photons at high ISO. This transition will vary for each camera, but the bottom line is, that if we follow an ETTR strategy, there is an upper limit (ISO) we should all be aware of.

In conclusion, I believe ML is on the right track by giving the user choices to maximize DR and S/N wrt the scene, ie extended bracketing (although with my 5DMkIII this is less important compared to my 50D) and a RAW LV histogram (a transformational feature).

Finally, IMHO, using all the ML features without understanding the camera-system’s limitations could bring disappointment.

Feature Requests / Re: RAW overexposure warning
« on: April 20, 2013, 03:30:29 PM »
Thanks for the clarification, I should have worked that out for myself!

Feature Requests / Re: RAW overexposure warning
« on: April 20, 2013, 03:25:15 PM »
I think many ML supporters are eagerly awaiting a raw histogram in LV.

One, most probably silly, thought is: could ML be used to provide the user an overexposure warning in the viewfinder?

My thought was that ML could hijack part of the Canon display; for example, could ML be used to flash the exposure indicator, or one of the other Canon display figures, to give an unambiguous warning to the user that the raw histogram is clipping?

As I say, most probably a silly thought.

ML already does this with auto bracketing.

Feature Requests / Re: RAW overexposure warning
« on: April 16, 2013, 03:05:27 PM »

There has been a lot of posting in the CHDK community on quasi-RAW histograms, that is, giving the user a higher bit-depth histogram, ie beyond the JPEG 8 bits.

The result is that I can now use Shot_histogram and inspect, say, the high end. The way the scheme works is that the algorithm lays down a sampling field and creates a sampled histogram, of defined bit depth, from the image.

As I say, it is not a true RAW histogram, but it is of higher fidelity than the 'native' one.

The downside is that it takes on-camera processing time; however, for its main use this is not so important. For example, I use it in an auto bracketing script as well as an ETTR script. It also has uses in timelapse scripts, eg bramping decision making.

Bottom line, the CHDK approach is not a RAW histogram, but it certainly seems to work for my scripts.
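A generic re-creation of that sampling idea, not CHDK's actual shot_histogram code, might look like this in Python:

```python
def sampled_histogram(image, bit_depth=10, step=8):
    """Build a reduced histogram from a sparse sampling grid, in the
    spirit of the CHDK scheme described above (this is an illustrative
    re-creation, not CHDK code). `image` is a 2-D list of raw values;
    a 14-bit raw scale is assumed."""
    n_bins = 1 << bit_depth                    # histogram bit depth
    raw_max = 1 << 14                          # assumed 14-bit raw data
    hist = [0] * n_bins
    for y in range(0, len(image), step):       # sample every `step`-th row
        row = image[y]
        for x in range(0, len(row), step):     # ... and every `step`-th column
            bin_ = row[x] * n_bins // raw_max  # scale raw value into a bin
            hist[min(bin_, n_bins - 1)] += 1
    return hist

# Inspecting, say, the high end is then just summing the top bins:
# top_stop_samples = sum(hist[len(hist) // 2:])
```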

The above is just an observation and not a request.

Feature Requests / Re: [WONTFIX] Canon 7d HDR
« on: April 04, 2013, 01:50:45 AM »
Regarding the 10-11 stop DR range above, I believe this always creates confusion. If you take where Canon sets the black point and the max value of a 14-bit histogram, then the camera is only able to capture about 5-6 stops of tonal data, as measured in Ev from the top of the histogram.

In other words, half the tonal range is held in the right-most 'stop', and then half again in each stop until the black point is reached, hence ETTR strategies.

I think using Auto bracketing solves most problems, other than a desire to shoot fast brackets.

Of course I could be totally wrong!

Feature Requests / Re: [WONTFIX] Canon 7d HDR
« on: April 03, 2013, 08:16:44 AM »
Surely we have this in the form of auto bracketing. In other words select auto and the Ev step and ML will take as many brackets to cover the dynamic range of the scene. I have seen auto bracketing take 15 images. In other words you are not limited to a max of 9, you simply can't specify or predict the number of brackets past 9!

Modules Development / Re: DotTune AFMA
« on: February 24, 2013, 04:40:15 PM »

Once I manage to get the DotTune function up and running on my 50D I will cross-check with the other AFMA process I use, ie FocusTune.

I have found FocusTune to work well and it provides graphical plots that help provide confidence that you have indeed hit the sweet spot, ie it will be good to see DotTune and FocusTune align!



Modules Development / Re: DotTune AFMA
« on: February 24, 2013, 04:22:33 PM »

As I don't build myself, I will wait for the next nightly build and add to trsaunders' testing, ie I have the following Canon lenses: 50mm, 100mm macro, 70-200 F4, plus I have a 10-20 Sigma and a 150-500 Sigma.



Modules Development / Re: DotTune AFMA
« on: February 24, 2013, 04:08:26 PM »

You beat me to it: that is, testing on the 50D.

However, although I downloaded the latest nightly build (23 Feb), for some reason I don't see the same ML start layout as you.


Any idea why this should be?

Also, no AFMA function.



General Help Q&A / Re: MLU and bracketing
« on: February 15, 2013, 01:52:40 PM »
I thought you might say that. In other words, ML cannot control the MLU across separate shutter operations, as in LV.

A pity.

Thanks for the quick reply.
