Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - garry23

#301
General Help Q&A / ND Bulb timer
June 03, 2015, 01:21:52 AM
I know there was an ND bulb timer in ML at one time, as a module I think.

I can't seem to find it now.

Does anyone know where it is?

I hope it hasn't been removed.
#302
Tutorials and Creative Uses / LV viewer
May 24, 2015, 03:46:41 PM
I thought some may be interested in my latest post, as it was triggered by an ML need: namely how to access the ML-enhanced LV in bright sunshine.

After contemplating an HDMI monitor, I elected to go with the Varavon Multifinder.

So far I'm pleased.

http://photography.grayheron.net/2015/05/gadgets-gizmos_23.html

Cheers
#303
General Development / ML code development
May 09, 2015, 03:48:31 PM
Like many, I love the compile-in-the-cloud tool, as it allows me to test tweaks to the ML code; for instance, I added diffraction to the DoF feedback. But here's my problem.

My coding skill is at a novice level, for example Alex improved on my coding :-)

I keep reading the ML code, but as I'm sure many have found, it is not easy to reverse engineer, especially when the functions are spread out over the various .c and .h files. It takes a lot of time.

I have tried asking questions on the forum, but I'm afraid I've not got anywhere with this approach; people are busy and I get that.

This area of the forum is meant to be where we ask for help. So my question is simple: do new coders like me continue to ask questions here, or should we create a new area of the forum where new code developers can ask coding questions and seek help from a group of mentors who are prepared to give their time?

Cheers

Garry
#304
There has been a lot of hype in the last week or so on the 'incredible' Sony touchless shutter app, which exploits the viewfinder properties to allow the user to trigger the camera by bringing their hand close to the viewfinder.

Of course as an ML power user I've had this feature for a while...and more :-)

I'm on a tripod.

Simply switch on motion detection under the shoot menu, I use trigger by expo change, trigger level 8, detect size large and a 1s delay.

I then switch to the power user's mode, ie LV ;-)

I compose and in LV use the depth of field feedback, now with diffraction correction, to set the focus.

I then ETTR via the SET.

I then simply wave my hand in front of the lens to trigger the capture in a touchless manner.

ML outshines all others once again.
#305
http://www.georgedouvos.com/douvos/Image_Sharpness_vs_Aperture_2.html

https://bitbucket.org/hudson/magic-lantern/pull-request/632/lensc-edited-to-account-for-diffraction-in/diff


Quote from: Original Post@Audionut

I would like to contribute, as I'm sure other photographers would like to have this 'refinement' on DoF. However, I feel my coding is rather too camera centric and I would like to add some user input (but haven't worked out how to do this yet), eg put up a sub menu, say in the focus menu, under DoF.

Bottom line: in time I hope to submit, but I need to do a bit more coding first.
#306
General Help Q&A / Info on LV x-y coordinates
April 13, 2015, 03:23:29 AM
I wish to personalize ML for my old eyes, ie put key info in a fixed position on the LV screen, with a black BG.

With the new 'compile in the cloud' I am confident I can do this.

I have already proved I can change fonts to FONT_LARGE from FONT_MED in the core ML code (I assume large is the largest?).

I now wish to explore further.

I wonder if some kind developer can throw some light on the LV coordinate scheme that ML uses, eg bottom left and top right X-Y coordinates.

Cheers
#307
General Help Q&A / Compiling in the Cloud
April 12, 2015, 05:19:09 PM
This is not a repeat topic, as dmilligan et al are trying to help me simply get the 'compile in the cloud' running: http://www.magiclantern.fm/forum/index.php?topic=14725.msg144847;topicseen#msg144847

I'm on Windows 8 and using Firefox browser.

I simply cannot get it to run, and all I'm doing is following the guidance in the first post.

Could one of the ML non-developers who are successfully using the 'compile in the cloud' approach share with me their step-by-step approach, so I can eliminate user problems on my part!

Many thanks

Garry
#308
In another post I suggested a tweak to the dof focus reporting and hoped some kind developer would 'magically' tweak things.

I think I was asking too much.

Although I'm not a coder, I thought I would try and make a halfway step, ie do some coding myself.

Here are my simple tweaks to lens.c; in addition, a global variable would need to be added as a toggle in the DoF menu, ie where you switch DoF reporting on and off. I have not been able to do this as I don't know where to look.

Please don't crucify me!!!! I'm trying to learn.

{
    #ifdef CONFIG_FULLFRAME
    const uint64_t        coc = 29; // 1/1000 mm
    #else
    const uint64_t        coc = 19; // 1/1000 mm
    #endif
    const uint64_t        fd = info->focus_dist * 10; // into mm
    const uint64_t        fl = info->focal_len; // already in mm

    // If we have no aperture value then we can't compute any of this
    // Not all lenses report the focus distance
    if( fl == 0 || info->aperture == 0 )
    {
        info->dof_near        = 0;
        info->dof_far        = 0;
        info->hyperfocal    = 0;
        info->rel_abs = 1; // Global toggle set in the DoF menu: 1 = relative
                           // reporting, 0 = absolute. This toggle needs adding
                           // to the DoF menu as a user input, with a default of 1
        return;
    }

    const uint64_t        fl2 = fl * fl;

    // The aperture is scaled by 10 and the CoC by 1000,
    // so scale the focal len, too.  This results in a mm measurement
    const uint64_t H = ((1000 * fl2) / (info->aperture  * coc)) * 10;
    info->hyperfocal = H;

    // If we do not have the focus distance, then we can not compute
    // near and far parameters
    if( fd == 0 )
    {
        info->dof_near        = 0;
        info->dof_far        = 0;
        return;
    }

    // fd is in mm, H is in mm, but the product of H * fd can
    // exceed 2^32, so we scale it back down before processing
    info->dof_near = (H * fd) / ( H + fd ); // in mm
    if( fd >= H )
        info->dof_far = 1000 * 1000; // infinity
    else
    {
        info->dof_far = (H * fd) / ( H - fd ); // in mm
    }
    // Check if absolute reporting requested and adjust dofs
    if (info->rel_abs == 0)
    {
        info->dof_near = fd - info->dof_near;
        info->dof_far = fd + info->dof_far;
    }
// Note a refinement would be to add some text or colour to indicate rel vs abs reporting
// But this is a refinement that isn't needed

}

Bottom line: once again I hope someone with the ability to compile nightlies will add this simple tweak.
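For anyone who wants to sanity-check the arithmetic before compiling, here is a standalone restatement of the same integer maths (the constants and scalings mirror the lens.c fragment above; the 50mm f/8 example values in the note below are just mine):

```c
#include <stdint.h>

/* Same scaling conventions as lens.c: CoC in 1/1000 mm, aperture x10,
   all distances in mm. */
static const uint64_t COC_FF = 29; /* full-frame circle of confusion, 1/1000 mm */

/* Hyperfocal distance in mm for focal length fl (mm) and aperture (f-number x10). */
uint64_t hyperfocal_mm(uint64_t fl, uint64_t aperture)
{
    /* aperture is scaled by 10 and the CoC by 1000, so scale to match */
    return ((1000 * fl * fl) / (aperture * COC_FF)) * 10;
}

/* Near limit of the depth of field in mm, for focus distance fd (mm). */
uint64_t dof_near_mm(uint64_t H, uint64_t fd)
{
    return (H * fd) / (H + fd);
}

/* Far limit of the depth of field in mm; at or beyond the hyperfocal
   distance the far limit is reported as 1000000 ("infinity"). */
uint64_t dof_far_mm(uint64_t H, uint64_t fd)
{
    if (fd >= H)
        return 1000 * 1000;
    return (H * fd) / (H - fd);
}
```

For a 50mm lens at f/8 (aperture = 80) this gives H = 10770mm, ie about 10.8m; focused at 5m the sharp zone runs from roughly 3.41m to 9.33m, which agrees with the usual thin-lens tables.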
#309
As developers tweak and add to the nightlies I was hoping one kind developer would tweak the LV DOF display.

First of all why?

As a landscape photographer I often undertake focus stacking. This is not, of course, like macro stacking, where the focus steps are linear and small. Landscape focus stacking is non-linear, and the steps change from frame to frame.

For the occasions when the ML returns focus info, eg my 24-105 F/4L, ML provides a realtime aid for the landscape focus stacker.

I focus on the nearest feature I wish to see in focus, note the depth of field either side, and move my focus deeper into the field until the second sharp zone overlaps with the first. And repeat until the last focus reports infinity.

At the moment it's a little tricky for several reasons.

First, you need to do some mental arithmetic, as the focus distance is in absolute units, ie ft, and the focus distances either side are relative, ie to the focus distance. What would be great is if the focus info reported the absolute distances for all three values. This could be user selectable, ie relative or absolute distance reporting.

Second, the text is really difficult to read, unless you have young eyes. It would be good if the user could select the size of the text, ie normal, double or 4x, say. BTW this would be great for other ML feedback, eg the spotmeter, which is really small and difficult to read.

Finally, I'm not sure what equations are being used, but I assume they use a pupil ratio of 1. So my final 'request' is can we use the full equation, including the pupil ratio and have that ratio and the CoC as two user variables, ie see www.toothwalker.org/optics/dofderivation.html. Other variables in the equations are known to ML, ie aperture and FL. 

As I say, I hope some kind developer could do these tweaks and make the DOF ML feature a real tool for landscape photographers.

PLEASE IGNORE SOME OF THE ABOVE: AS I NOW REALIZE THAT ML DOES REPORT ABSOLUTE DISTANCES :-)
#310
Tutorials and Creative Uses / Tilt-Shift Lens
April 05, 2015, 12:57:49 AM
In a post a while back I gave some feedback on how useful ETTR is when setting the exposure on a Tilt-Shift lens, when the in-camera exposure meter can be fooled.

I just posted some more ML enhanced tilt-shift feedback that some with a tilt-shift may find of value.

http://photography.grayheron.net/2015/04/shifting-to-tell-story.html
#311
Feature Requests / Intellegent Spotmeter
April 01, 2015, 03:11:17 AM
Forgive me if this has been requested before.

The current LV spotmeter says "Measure brightness from a small spot in the frame." It says nothing about what small means, ie the area.

As someone who uses a 1-deg spot meter a lot, I wonder if it is possible to give the spotmeter some intelligence.

By this I mean the user can tell ML (which assumes a 50mm lens is fitted) what the focal length of the actual lens is, and ML, which I assume is sampling the sensor LV feed, adjusts the spot area to the equivalent of a 1-deg spot.

#312
Feature Requests / Histogram Review Function
February 23, 2015, 05:10:02 PM
Many photographers, eg those using mirrorless cameras, are benefiting from real-time EVF histograms and blinkies/zebras, ie using these two tools to help set exposure.

With ML in LV we have the similar/same ability.

However, the EVF (ignoring RAW vs JPEG histogram benefits) has a real advantage: for those with 'old' eyes, and/or when you are shooting in bright conditions, the ML LV ETTR hints (text) and histogram are small and sometimes difficult to see.

What I wonder/hope is that the developers can build in an LV feature where the user can customize the size (and position?) of the ETTR feedback, or toggle it on and off or between different sizes; ie for many/all photographers, once we have composed/focused the scene, and our sensor is looking at the right place, the LV image of that scene is irrelevant. What is important is information on the exposure...as long as you can see it.

Thus my hope is that when in ML-LV, we can have an ML feature that gives us much larger text about such things as ETTR, ie hints etc, and a much, much larger view of the RAW histogram.

As usual I freely end with all the caveats: I love ML, I'm not a coder, and I hope some coder can relate to what I'm looking for.

#313
General Help Q&A / Macro ETTR strangeness
February 22, 2015, 08:17:57 PM
I have been trying to work out what is going on with my 5DIII and my F2.8L 100mm Macro lens.

I have Exp Sim on, Global Draw on All, and am using ETTR to get a base exposure.

At F2.8 the exposure looks OK.

When I try to ETTR at F/16, the ETTR does not 'correct' for the smaller aperture, ie the shutter stays at the F/2.8 value.

This is confusing me, as I thought ETTR used the actual sensor data, ie irrespective of what the lens optics are doing, hence ETTR works with manual aperture lenses.

Can anyone educate me?

Cheers
#315
General Help Q&A / Back button issue
September 04, 2014, 08:48:51 AM
I buried a question in another post the other week, and got no reply.

Forgive me trying one more time.

It seems that if you use the shutter half-press double-press option to trigger AETTR, it also triggers off a double press of the back-button focus. At least on my 50D and 5DIII.

Does anyone know if this can be switched off, ie just trigger off the shutter and not the back button?

Is this just a 'feature' of the code that could be tweaked to correct this effect?

Bottom line: I would have thought everyone using back-button focus would wish this to be kept just for that, ie not triggering AETTR.

Cheers

Garry
#316
Feature Requests / Long Exposure ETTR setting
July 29, 2014, 12:45:57 AM
As someone who uses high NDs, eg 4, 10 and thus 14 stops, for LE photography, the challenge is 'guessing' the exposure, especially at the 10 or 14 stop level.
I have been experimenting with A-ETTR and believe I am close to this being an LE photographer's savior. But the current coding of ETTR does not always find a solution.

My first 'non-optimal' ML-enhanced workflow goes like this:
•   Compose & focus without the filters on;
•   Place the filters on the camera;
•   Set the ISO;
•   Set the shutter speed to anything between 1 and 30 seconds, according to the scene;
•   Set an aperture, eg F/22 or F/16 or something;
•   Press ETTR and get ML to adjust the ISO (normally ETTR always prefers low ISO solutions), ie don't touch the exposure time or aperture, 'just' seek a solution for ISO;
•   Adjust the shutter time and F-number if ETTR fails to get a solution;
•   Take note of the ISO and adjust it down to 100 and change the shutter speed and/or aperture accordingly.

My ideal ML-enhanced LE workflow is as follows (but it needs coding, which I can't do):
•   Compose & focus without the filters on;
•   Set the aperture for the desired 'look';
•   Set the ISO;
•   Place the filters on the camera;
•   Set 'LE solution' in the A-ETTR menu, ie a new setting yet to be coded;
•   Press ETTR to get an LE solution, ie the required LE shutter time (or a pretty good guess) at the F number and ISO you set, irrespective of the time or the ND filters you have put on the camera (ideally this time would be passed to the ML bulb timer, ie a perfect world).
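The ND arithmetic such an 'LE solution' would need is just a power of two; a minimal sketch (the function name is mine, not ML's):

```c
/* Long-exposure shutter time in seconds: the unfiltered metered time
   scaled up by 2^stops for the stacked ND filters (eg a 4-stop plus a
   10-stop filter gives nd_stops = 14). */
double le_shutter_s(double metered_s, int nd_stops)
{
    return metered_s * (double)(1ULL << nd_stops);
}
```

eg a 1/125s unfiltered metering behind a 14-stop stack becomes 0.008 x 16384, ie about 131 seconds: bulb territory, and exactly the number you would want handed on to the ML bulb timer.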

Are there any other LE photographers out there looking for an ML 'LE ND filter' helper? Do the coders think this feature worthy of including in the 'things to do' list?

#317
Here is my first attempt at using AETTR to cover the Holy Grail sunset period. The video covers 50 mins and was taken with my 5DIII using shutter actuations, ie not Silent DNG.

Post processing was done in LRTimelapse for the static version and Panolapse for the zoom/pan version.



#318
Feature Requests / Linking AETTR to intervalometer
July 08, 2014, 05:26:16 AM
Wouldn't it be great if the ML intervalometer had a user variable to call AETTR every nth frame?

Joke!!!!!!!! Or not!

Please don't shoot a humble photographer trying to refine his craft.
#319
General Help Q&A / Can someone educate me?
June 24, 2014, 05:44:52 PM
This is very much a background question, but if some kind soul could point me at a read I would be grateful.

In still mode the lens focuses on to the full sensor surface.

In video mode it must do the same thing.

The question is: how does the video recording handle the difference in pixels recorded, eg does video 'simply' bin pixels? And if so, is there an advantage in S/N, ie more captured photons at each site?

As I say, a background question.
#320
General Help Q&A / ETTR when HDMI connected
June 22, 2014, 11:51:20 PM
I'm giving a talk at my local club on ML and will be projecting from my 5DII via an HDMI cable.

Just tried it out, and ignoring the few seconds to initially sync, I have noticed a strange thing.

With the HDMI cable out, ETTR works as it should. With the cable in, it always goes to 1/8000 and says exposure limits reached.

Has anyone else observed this behavior? It is not an issue for me, as I don't shoot with an HDMI cable attached.
#321
Share Your Videos / 5DIII Silent DNG Capture
June 03, 2014, 04:39:10 AM
I thought some would be interested in seeing what shutterless capture can do.

Of course this is a very simple (constant exposure) capture.

But I like that it uses zero shutter counts!

#322
Duplicate Questions / Silent DNG
May 30, 2014, 04:15:43 PM
At the risk of one of the ML gurus jumping on me, I offer the following as my humble attempt (as an ML evangelist, but not an ML coder) to help nudge ML along.

My focus is to ensure the ML user community has access to the evolving silent DNG timelapse. At the moment the ML community can capture silent DNG timelapses but, as we know, these DNGs have missing EXIF data.

I am fully aware that the 'missing' (exposure info) EXIF will get fixed in time and I am not pushing this.

What I have thought about is palliatives and I can imagine two possibilities (but both need some 'recoding').

Palliative one is to request that deflicker.mo gets 'tweaked' to ensure it handles silent DNGs as well as .cr2s.

Palliative two is to ask if the timelapse function can be 'tweaked' with an option to dump an .XMP sidecar file when taking a timelapse sequence, and hope this sidecar handles the silent DNG.

I understand that both of the above may require additional coding if we are looking for perfect EXIF data. However, in this case 'all' we are after is the exposure time, ISO and aperture. In other words, if the sidecar has this associated with the silent DNG (000001.DNG & 000001.XMP) then I can simply write a .BAT to extract the exposure EXIF and use this.

Once again, I hope my 'request' doesn't upset the ML gurus (too much). I truly have written this request knowing that fixing the silent DNG EXIF is not top priority and could take a lot of effort (?), and thus I have been thinking about 'easier' palliatives to get silent timelapses going (which I believe is a real ML win).
#323
Duplicate Questions / Silent picture vs RAW movie
May 30, 2014, 01:09:39 AM
I am a little confused as clearly RAW 14bit video works on a 5DIII as does silent picture DNG.

I have the latest download installed.

I have a 160MB/s card.

Could someone tell me why, if I set FPS override to, say, 10, burst silent mode always stops at 28 frames, ie does not continuously write images like RAW movie does?

As I say, I'm confused and I admit ignorant of the technicalities.
#324
General Help Q&A / Timelapse shutter drag
May 27, 2014, 12:09:27 AM
I am using silent picture mode for timelapse and it is great. I ingest into LR and use LRTimelapse.

One thing I'm not sure about is the exposure when pulling a silent picture from the LV.

Assuming I set a base exposure of ISO 100, f/10 at 1/200, is this the exposure that the silent picture DNG is captured at? Or is silent picture not related to the mechanical shutter?

I correct the missing dng EXIF in LR using LensTagger. This is OK for a fixed timelapse, but if I shoot a-ETTR holy grail then the EXIF data could be all over the place. So I still need to work out how to correct the EXIF in this case.

Finally, is there a way to tell the camera that I don't want to shoot faster than x, ie to ensure the 'shutter' is dragged? Of course this is irrelevant if a silent picture doesn't have an equivalent shutter time. There are ways to tell the camera not to shoot slower than y.
#325
As a stills photographer I have come late to the ML 'video' side.

However, I am becoming a convert :-)

Here are a couple of early thoughts:
http://photography.grayheron.net/2014/05/moving-on-literally.html
#326
I hope some kind person can tell me where I'm going wrong on my 5DIII.

I have the latest download.

I have enabled the silent module and it works OK in both half push and burst, ie I get .dng, which I can open in LR.

When I try to set up a simple timelapse, switch to LV and try to take a silent timelapse, I 'just' get a single silent DNG until I press half-shutter again, ie the timelapse doesn't seem to initiate and trigger the silent capture.

Can someone spot what I'm doing wrong?

Cheers
#327
Does anyone know what time between frames is hard coded into the dual-ISO alternate-frame feature?

In other words, when does it reset, if at all?

I would welcome a user time setting, ie if the next frame is taken within x seconds it will be dual, else it will be normal.

And a switch to have the first frame normal and the second dual. This way, with the time window, you have more options.
#328
I know some of you ML videographers need extended battery life; we still photographers who use LV more with ML need it too.

Could this be your answer?

http://fstoppers.com/diy-dslr-external-battery-pack-get-up-to-9-hours-of-shooting
#329
General Help Q&A / CeroNoice Bayer Interference?
April 21, 2014, 11:17:44 PM
As I process my images with CeroNoice, I notice that some appear to have a 'matrix' patterning over the entire image.

If I zoom in, to where I see pixels, it seems the darker matrix/grid is every fourth pixel.

Has anyone else seen this?
#330
General Help Q&A / Exposure <= 0 warning
April 20, 2014, 03:09:15 PM
Despite CeroNoice and dng_validate appearing to work perfectly, ie I get a 32-bit tiff, I get a warning every time that my exposure <= 0, plus warnings about padding.

Can someone who knows what is going on throw some light on this?

Cheers
#331
I think there are three basic types of people who read the postings on this ML forum: the developers & coders, who know what they're doing; those who have no clue what's going on; and those, like myself, who know enough but want to know more!

I have been excited about the work Alex and others have been doing with CeroNoice. I think it is an exciting tool, but, as usual with the bleeding edge stuff on this forum, it is not always easy for those like myself to follow. So 'idiots' like me end up asking 'dumb' questions and looking like fools when the experts come back at us with the 'obvious'.

Until this week, I had never written a Windows .bat script, but I sensed this is what I needed for my workflow with CeroNoice and Lightroom. Thus what follows is how I have done it and got CeroNoice running for me. To be clear, 'all' I want to do is use CeroNoice to create a 32-bit (tif) negative for me, starting with a set of bracketed .cr2 files. That is, I'm not interested in the intermediate .dngs: I want an LR workflow.

So this is my LR-based workflow. I have appended the .bat file I wrote to the end of this post (which I'm sure will have the experts cringing, ie I'm sure it could be more efficient, but it works for me).

First, set up a Folder where you will do your 'out of LR' processing, eg a folder called CeroNoice, or anything you like.

In that folder place the following executables, which you will find references to on this site: CeroNoice.exe, dcraw.exe, dng_validate.exe and exiftool.exe. Note: I'm not even sure you need all of these, but there is no harm placing them in the folder. Also note (read the stuff on this site): you must have the right/latest versions, ie each piece of software needs to be compatible with the others.

Open up a .txt file, ie in NotePad, paste in the .bat text below and save this file as a .bat file, ie with a .bat extension. You can call it what you like, but make sure it is in your processing folder.

The Folder is now set up.

Now go to Lightroom (you don't have to use LR to get your .cr2 files into the processing folder, but this is how I do it). Set up an export preset with the following attributes:
1.   Identify the processing folder as the export location
2.   Don't identify any sub folder
3.   Select file naming as Custom Name – Sequence
4.   Enter "in", without quotes or spaces, as the custom text (key point)
5.   Select Original as the File setting format, ie the .bat file works with .cr2 files. You will need to change the .bat if you use .dngs.
6.   Save this as an export preset, eg called, say, CeroNoice

You are now ready to process .cr2 brackets.

Go to the brackets you are interested in processing, say x of them. Select the darkest one (key step!) and, holding the Ctrl button down, left click on the other brackets you wish to select, working from darker to lighter.

You will now have x images selected. Now right click on the brackets you have selected in order, and choose export...

Select the CeroNoice preset and press Export. If you are doing multiple brackets, you need only select Export with Previous next time (unless you have used another preset in between), BUT only process one bracket set at a time!

Now go to the folder you set up, where you should see x .cr2 images and x .xmp files.

Now double click the .bat file.

You are now presented with one user input, ie how many brackets. Now is not the time to tell lies!

Enter the number of brackets and press enter.

The .bat file will do its stuff and create a 32-bit Tiff called out.tif. All input files will be deleted.

You can now play with the 32-bit Tiff, say, in PS-CC or ACR. But, as I said, this post is for the LR users.

Go back to LR and the Library module, select Import..., select the folder where the .tif is (there should only be one image there), ensure Move is selected, select your 'To' folder, ie where you wish to move the 32-bit file to, and press Import.

The file will move from the processing folder, so this is ready for another bracket set, and now it will be in LR, ready for 32-bit processing!

The file will be large!

The file will be green!

However, all the LR processing tools work at the 32-bit level, including the WB.

That's it: I hope some have found this post useful!

Here is the .bat text. As I said, it is far from perfect, but it is functional (although I do get warnings, it appears not to matter!). Maybe Alex or someone else can throw some light on these warnings?

@echo off

title CeroNoice Processor

rem ask how many brackets; default to 1 if the user just presses enter
set brackets=1
set /p brackets=How many brackets to process?
rem add one so the counter comparison below stops after the last bracket
set /a brackets=%brackets%+1

set counter=1

rem build up the command line: ceronoice.exe in-1.cr2 in-2.cr2 ...
set cero=ceronoice.exe

:begin

set cero=%cero% in-%counter%.cr2

set /a counter=%counter%+1

if %counter% EQU %brackets% (
goto continue
) else (
goto begin
)

:continue

rem run CeroNoice, then write the merged result out as a 32-bit out.tif
%cero%
dng_validate.exe -3 out out.dng

rem ensure all the input and working files are deleted
del *.xmp
del *.cr2
del *.dng

exit


#332
General Help Q&A / Motion Detect
April 13, 2014, 12:55:09 AM
I have been trying to get motion detection running on my 5DIII. I have never used this feature before.

I have set the exposure as normal and in the MD menu used difference, medium and tried different trigger levels.

When I come out of the ML menu the MD automatically goes to LV. So far so good.

The problem is that the outer and inner of the two boxes, which I can move around OK, are black. And MD doesn't trigger, eg when I move my hand in front of the lens.

I have tried exp sim on and off. The detection box remains black.

Can someone throw a little help my way?

Cheers
#333
General Help Q&A / Dual ISO
April 11, 2014, 03:58:42 AM
I'm running bleeding edge on a 5DIII; all has been fine until this evening, when I noted a strange thing.

I'm running A-ETTR and dual-ISO, but when I enter 800 or 1600 in the dual setting and use ETTR, dual sets to 100/100.

I have never seen this before.

Has anyone any pointers to what, if anything, I'm doing wrong?
#334
Feature Requests / LCD and Camranger
March 26, 2014, 12:09:00 AM
First an observation.

When using CamRanger only the Canon LCD is visible in CamRanger, ie the ML overlays are not.

So the request is: can the Canon LCD output that CamRanger taps into be merged with the ML overlay feed before it is fed out of the camera?

This way any piece of hardware or software that took the Canon LCD feed would also see the ML data.
#335
Feature Requests / Focus cue
March 25, 2014, 10:57:14 PM
In a previous post I suggested a few enhancements to Magic Zoom, ie a thicker green bar or a green dot, and/or some audio feedback.

This post is not a repeat of that request!

Whilst shooting today I noted that when Magic Zoom 'found' focus I could hear an audible click from the camera. I was using MZ with a manual lens (TS-E 24mm II) so it was not the lens making the sound.

My observation is that if you are in a quiet area, and you have difficulty 'seeing' the MZ focus confirm (I use a Hoodman to see mine on the LCD), listen out for the 'magic click': it may save your day!

BTW I have no idea where the click is coming from: does anyone?
#336
Feature Requests / Magic Focus
March 23, 2014, 03:25:48 PM
In a previous post I buried a request related to Magic Zoom, hence I'm sure it was lost.

In Magic Zoom, which is a critical tool for photographers like me with manual lenses, ie clearer focus confirmation than focus peaking, especially in bright sun, we have three ways to show focus.

The one I believe needs 'enhancing' is the green bars method.

I challenge anyone to 'see' the green bars, which I assume appear at the top and bottom of the focus frame.

May I request that the bars be made a lot 'thicker' or that a big green dot be made to appear in the MZ window when focus is achieved.

#337
I think most people who post here are well aware that an ML-based workflow greatly helps the photographer 'get the best' data for post-processing.

Like many these days, especially those using a FF DSLR (I have a 5DIII as well as an IR-converted 50D), I find myself drawn to using ETTR and, when it is useful, Dual-ISO.

One particular area where an ML-based workflow really helps is when you are using a Tilt-Shift lens, as the normal in-camera Canon metering is only effective at zero tilt and shift. As soon as you tilt and/or shift, the camera's metering cannot be relied upon.

Magic Lantern to the rescue. How? Well this is my typical TS-E 24mm F3.5L workflow:
- Put the camera in manual – note the TS-E has no auto-focus, obviously;
- Visualize the scene and, if required, explore/estimate the required shift range;
- Estimate the hinge height you need, ie for the tilt in degrees I use 9/(2J), where J is the required hinge height in ft;
- Set the base focus, eg at slightly beyond the hyperfocal distance is not an unreasonable starting point, and as the TS-E is a prime, this number never changes for a given aperture, ie easy to remember!;
- Check the focus along the plane of sharp focus and adjust the tilt and focus accordingly: changing focus rotates the plane of sharp focus around the hinge, and adjusting the aperture changes the focus wedge angle;
- Shift as required;
- Now the real Magic occurs. Simply invoke ML's Auto-ETTR (I use the SET method) and the exposure is adjusted to ensure your data capture is maximized, ie to the right according to your settings.

In other words, using ML, you have one less thing to worry about, ie exposure!

Bottom line: If you have a Tilt-Shift lens, then I strongly suggest you try out the ML ETTR exposure setting in your workflow.
#338
Feature Requests / Lens informed spotmeter
March 16, 2014, 10:26:30 PM
For those of us who use an external spotmeter rather than the Canon spot metering, we do so because of the 1-deg spot size.

I wonder if it is possible to replicate a 1 deg spot meter in ML in a semi intelligent manner.

First, the ML spot would try to auto-detect the focal length of the lens and, knowing the sensor size, work out the pixels to use for a 1-deg spot.

If the FL is not detected ML would flag this up, use a default FL and give the user the ability to manually input the FL.

Such a spot function would really help those that wish to manually evaluate the scene.

Ideally the ML spot meter would provide exposure values, as with an external spot meter.
#339
General Chat / Fibonacci Exposure Bracketing
March 12, 2014, 02:21:36 AM
Some may have seen this, but in case you haven't, you may be interested in this article. It might stimulate the ML gurus ;-o)

http://www1.cs.columbia.edu/CAVE/publications/pdfs/Gupta_ICCV13b.pdf
#340
Feature Requests / Focus Stacking
December 11, 2013, 01:50:06 AM
For those who desire fully tack-sharp depth of field, the only way is to carry out focus stacking. For example, even shooting at the HFD will not give full tack sharpness, merely acceptable out-of-focusness. As an example, on my 24-105mm at 24mm, if I choose a blur spot, ie CoC + diffraction, of 20 microns, the HFD is about 14ft at around F10. However, the defined sharpness zone 'only' extends down to about 6ft.

I know some are saying that is fantastic, but if you want to cover down to a couple of feet, say, you would need to focus stack. For example, at 24mm on my 5DIII, I would need to take three shots at 3, 5 and 15ft.

All my calculations are made using TrueDOF-Pro, Optimum-CSP and FocusStacker.

Ok, the 'request' is this. Can the developers give us a way to specify a set of focus distances, ie three in the example above? Ideally the user would be able to specify a few FLs and the related focus distances, say up to 9, so these can be easily selected from a menu.

I hope the above is clear and that it hasn't been requested before.

I appreciate this would only work for some lenses.

If enabled it would be like focus bracketing, but at specified distances.
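For reference, the basic thin-lens relationships behind these distances can be sketched as below. Note this is only an approximation and the function names are mine: the apps mentioned above (TrueDOF-Pro etc.) fold diffraction into the blur spot differently, which is why the plain formula gives roughly 2.9 m (about 9.5 ft) rather than the ~14 ft quoted.

```python
def hyperfocal_m(f_mm, N, coc_mm):
    """Thin-lens hyperfocal distance in metres (ignores diffraction)."""
    f = f_mm / 1000.0
    return f * f / (N * coc_mm / 1000.0) + f

def near_limit_m(s_m, H_m, f_mm):
    """Near limit of acceptable sharpness when focused at s_m metres."""
    f = f_mm / 1000.0
    return s_m * (H_m - f) / (H_m + s_m - 2 * f)

# 24 mm at f/10 with a 20-micron blur spot:
H = hyperfocal_m(24, 10, 0.020)
print(round(H, 2))                       # hyperfocal distance in metres
print(round(near_limit_m(H, H, 24), 2))  # near limit when focused at H (H/2)
```

Focusing at the hyperfocal distance only renders everything from H/2 to infinity "acceptably" sharp, which is exactly why closer subjects force a focus stack.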
#341
Feature Requests / Enhanced ETTR request
October 30, 2013, 03:03:23 AM
Currently we have three great ML ways to guarantee we take a set of brackets that cover a scene's  required dynamic range capture, ie the important highlights and the important shadows.

First, we can meter and calculate the required bracket set, then use ML bracketing.

Second, we can use auto bracketing in ML, but 'lose' control of the number of brackets.

Thirdly, we can use AETTR: meter for the highlights, adjust from the 18% zone V to, say, zone VII, use this as the base exposure and invoke AETTR to capture the calculated ETTR one. You thus end up with two brackets, a user-selected highlight exposure and an ETTR one. But these two brackets may be too far apart for post-processing.

What I am proposing is an ETTR enhancement: give the AETTR menu one more variable, namely the number of images to take, evenly spaced, between the base exposure and the ETTR one. At the moment this is zero. In other words, no intermediate brackets.

The new user variable, between 0 and 9, say, will insert that number of brackets, evenly spaced between the base and the calculated AETTR.

This enhancement of AETTR will bridge the benefits of advanced bracketing and AETTR.
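To make the request concrete, the spacing could work something like this (a hypothetical Python sketch of the requested behaviour, not actual ML code):

```python
def bracket_evs(base_ev, ettr_ev, n_intermediate):
    """Exposure values from the base frame to the ETTR frame,
    with n_intermediate evenly spaced frames inserted between them.
    n_intermediate = 0 reproduces the current two-frame behaviour."""
    steps = n_intermediate + 1
    return [base_ev + (ettr_ev - base_ev) * i / steps for i in range(steps + 1)]

# Base (highlight-metered) frame at 0 EV, calculated ETTR frame at +4 EV,
# with two extra frames requested in between:
print(bracket_evs(0.0, 4.0, 2))   # 0, +1 1/3, +2 2/3 and +4 EV
```

With the new variable at 0 nothing changes; setting it to 2 in this example turns a 4 EV gap into manageable 1 1/3 EV steps for post-processing.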

Finally, I'm not a programmer but would like to know how to take a module and tweak it, ie get at the source text, tweak it and recompile it.

Cheers

Garry
#342
Let me say up front that what follows is my personal workflow. I'm sharing it with fellow MLers in the spirit of learning and improving. My workflow is based on the latest nightly build for my 50D (I am waiting for the release for the 5DIII that works with the latest Canon firmware as I need the F8 focusing). I assume the reader is familiar with ML, Advanced Bracketing, Auto-ETTR and Dual-ISO.

So here is the workflow:
a.   Enable the appropriate modules, eg Auto-ETTR and Dual-ISO;

b.   Compose and focus the scene and assess the DR of the scene, either using 'guess work', in-camera metering (ML or Canon) or an external meter (I use a Sekonic L-750DR);

c.   Based on the metering decide on one of the following basic capture strategies:
i.   If the DR allows it, ie it is low and containable in a single image capture, use a single exposure and set metering handraulically using your photographic skills (in whatever mode you decide to use, ie Tv, Av or M). This is the non-ML-enhanced base approach;
ii.   As above, but get some help by using (double-half-press or SET, ie not 'Always-On' or Auto-Snap) Auto-ETTR (to obtain a single image capture) and ensure you maximize the quality/quantity of the image file, ie maximize the number of useful photons captured and converted, without blowing out highlights. A further refinement here is to switch on Dual-ISO as well, but I prefer not to use this as part of my photographic workflow;
iii.   Use Auto-Snap or Always-On AETTR and first meter for the highlights you wish to 'protect' (recompose as required), using this as the starting image for the AETTR capture. With this approach you will get at least two images: one with good highlight capture and the other with likely blown-out highlights but good shadow/mid-tone exposure (according to your AETTR settings), ie based on the AETTR algorithmics. This is a good strategy for capturing a two-image bracket set, as long as the scene's DR is not too large for your camera. This two-image bracketing is fast and virtually guarantees you will never blow out highlights that are important to you;
iv.   Switch off AETTR (and Dual-ISO), switch on advanced bracketing and select the number of brackets to cover your metering, or use the auto setting, which, although it will mean more image captures, will result in a full DR bracket set.

d.   Ingest into Lightroom (I have the Adobe Photographers set-up, ie Photoshop-CC + LR);

e.   For the single image captures I will then carry out basic LR processing as normal;

f.   For the two-bracket (auto-snap) capture I will adjust the images, eg to ensure good highlights in one and good shadows/mid-tones in the other. I will throw these two images down two post-processing paths. First I will use LR/Enfuse, and then I will use 'Merge to 32-bit HDR'. I then have two image files to 'play around' with, a 16-bit one and 32-bit one;

g.   For the advanced bracketing set I will once again try several post-processing routes, eg Photomatix, HDR Efex Pro 2 or 'Merge to 32-bit HDR'.

h.   In all cases I will usually go into Ps-CC and finish off the image with a variety of post-processing tools.

So, in conclusion, I'm not saying the above is THE way to go, but, for me, it works and I thank the ML team for that!

Cheers

Garry
#343
General Help Q&A / MLU and bracketing
February 15, 2013, 01:27:32 PM
When I take a sequence of brackets, say using auto detect, and have MLU engaged, my 50D seems to engage and disengage MLU every shot.

Whereas if I put the camera in LV and use bracketing, the mirror remains up throughout the sequence, and all I hear is the shutter operating.

Is there a way to get the mirror to go up and remain up throughout a bracketing sequence when  NOT in LV ?
#344
Feature Requests / Interface
December 14, 2012, 02:25:24 PM
A simple idea, which I recognize may not be possible. But here goes: would it be possible to add an iPod, iPhone or Android device to ML as an external input/output interface?

That is, you could use your iPhone, say, to change ML settings, ie more readable than the LCD screen.

Cheers

Garry
#345
Share Your Photos / Auto bracketing
December 01, 2012, 04:21:16 PM
I now virtually only use the auto bracketing setting for my HDR captures. Of course I use a tripod and I am not too critical of motion. If I need to capture a quicker sequence I will revert to the in-camera AEB and high speed, and accept I am limited to 3 shots (50D).

All these images were taken with the 50D in ML auto HDR mode:

http://grayheron.smugmug.com/Landscapes/Kasha-Katuwe-Tent-Rocks/26654412_hTH9fn#!i=2229550236&k=nrg77xR

You can't beat ML!

Cheers

Garry
#346
Feature Requests / GPS
November 19, 2012, 04:37:26 PM
Forgive if this has been requested before: I looked and could not see any reference.

With the release of the Canon GPS adaptor, I wonder if Magic Lantern could be used to extract the relevant EXIF info for those 'older' cameras that cannot access the GPS info directly.

Cheers

Garry
#347
General Help Q&A / MLU and bracketing
October 31, 2012, 06:16:36 PM
I have been using the latest nightly build on my 50D to try out the new aperture bracketing: and I like it.

This post is about a refinement that I think is missing in ML, but could make the 'bracketeers' life a lot easier (unless I'm missing something).

When bracketing, on a tripod, the usual 'best practice' is mirror lockup; however, when I try this through the normal ML menu, with the Canon MLU on or off, the MLU function seems to operate separately for each image. In other words, the MLU is operating individually for every bracket.

The way round this is to put the camera in LV and then take the ML-enabled brackets. BTW I mostly use the auto feature now as I don't need to 'worry' about the EV range.

With the camera in LV mode, ML only operates the shutter, ie the mirror does not move, remaining up until LV is turned off.

So, the thought is: would it be possible to add a feature that enables LV mode before ML takes the requested series of brackets, and then takes the camera out of LV mode at the end of the sequence capture? In other words, the bracketeer gets the possibility of capturing the most vibration-free images.

Cheers

Garry
#348
General Help Q&A / 50D users: how are you getting on?
August 27, 2012, 11:35:35 PM
I hope it is ok to ask how 50D users are getting on with 2.3, especially bramping?

Cheers

Garry
#349
General Help Q&A / Time lapse setting up
August 16, 2012, 04:41:45 PM
Tried my first bulb ramping yesterday with some success. However I have a couple of questions I hope the experts can help me with.

In post I noticed that, in addition to a few ISO jumps, as expected, parts of the exposure curve show very erratic changes in exposure, ie jumps between frames. Is this an ML algorithm feature or my settings?

Also, when I did the calibration step I got a nice S-curve but I had difficulty moving the 'cursor' up and down the curve. I have a 50D and could only use the small wheel, and then it only moved over part of the curve. Once again, does anyone have any idea if this 'just' reflects ML or is it me!

Cheers

Garry
#350
General Help Q&A / MagicZoom
July 29, 2012, 02:56:12 AM
Using RC2.3 with great success on my 50D, but I cannot find how to enable or show any focus help, eg green bars or split screen.

Could someone point me in the right direction?

Cheers

Garry