Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - PhotoCat

Pages: [1] 2 3
1
Thanks dfort for your help!  :)

2
Thanks dmilligan for the pointers! The process is clearer to me now and I will give that a try!  :)

3
Dear all,

With encouragement from dmilligan and a YouTube tutorial from Mathew Kelly, I have successfully built my first custom autoexec.bin
file with some spot meter mods in the zebra.c file. This new feature is working in my 5D2 now :)

Now I have got VirtualBox 4.3.28 mostly working, except for copy & paste from the host (Win7 Home Premium 64-bit) to the VM.
I am running gcc 4.8.3 here on the ML code I copied from Bitbucket on June 29, 2015.

What would be the next logical step for me?   Does anyone have a road map for venturing into ML development?

What is the official procedure to submit my mods and have them merged into the nightly?  What if someone else is also
working on the zebra.c file? Is TortoiseHg an answer?  I had no clue what a pull request or commit was until
I looked this up:

https://help.github.com/articles/github-glossary/

But even then I have no clue about the overall process.

Any pointers on the overall process will be highly appreciated! Thanks!

4
Yay! I did it  :)   Thanks for all your encouragement! Got the ML code to compile and loaded the autoexec.bin into my camera... no smoke...  8)

Got Virtualbox 4.3.28 and installed new toolchain 4.8.3 as per A1ex's recommendation.

https://launchpad.net/gcc-arm-embedded/+download

gcc-arm-none-eabi-4_8-2013q4-20131204-linux.tar.bz2 (md5)   Linux installation tarball

(not tried 4.8.4 yet)

Mathew's tutorial was great:

But somehow, probably due to the newer VirtualBox version, I was unable to get
host-to-VM copy and paste to work. Trying to install this function as per the video crashes VirtualBox.
So I skipped that part of the video, but everything else seemed to work OK.

BTW, sorry to hijack this thread!  Now what?  Should I open a new thread or post in an existing thread?
How do I get help from now on, as I am ready to make some very minor mods to the code?
Also, I need to know the procedure for submitting my new code (if any) to the ML team... absolutely no clue...
Any pointers would be appreciated! Thanks!

5
Thanks josepvm for the good info; I am getting more confident about giving it a second try...    :)

6
Thanks Licaon_Kter for the good info!  I think last time I only used gcc!
Wow, this gcc-arm-embedded one sounds wonderful!  Thanks!!   :)

7
Kind thanks to dmilligan for your pointers and encouragement once again! OK, I will try to set up the VM environment again on my Win7 machine...
and compile...
Hopefully I will have the courage to load the binaries into my cam!

btw, I will follow this video this time:

http://www.magiclantern.fm/forum/index.php?topic=6425.msg55525#msg55525

8
Thank you for your info, dmilligan, but at my beginner level I am really at a loss here.
Perhaps I should just go back to the good old way of compiling ML on my computer with a VM,
which I succeeded with at one time.

I think what I really need to know is how to tell whether the compiled binary is working and not corrupted,
due to any compilation mistakes I may have made.   Are there any tools available to test an ML binary without
risking one's own camera?   Thanks a lot!

9
Now I know what jipo is talking about:

 "Installing Software and Box access is only available if you have an active subscription."

I guess the "Installing software" and "terminal" functions are not free any more...

I signed in via Bitbucket... Can that be a problem?  Thanks!

10
Thanks Garry23 once again for the pointer!
Script writing as in DOS scripts or AWK/Perl scripts? I have played with them a little bit, but I am far from proficient!
For now, I don't dream of coding ML in C. Just modifying some constants and getting a working binary onto my camera
would excite me very much, LOL!
I also heard someone booted Linux on a Canon DSLR...   Not sure how that will change
the ML development workflow in the future...
Anyway, thanks for your help once again; I will see if I can get this cloud compile to work first :)

11
Wow thanks garry23!
I will try it then...  so I don't need to set up the virtual machine on my PC any more?

My biggest roadblock was that I got ML to compile and got the ML binaries, but I
had no confidence to install them in my camera! So I was stuck, sadly...
Not knowing how the revision control software works (pull/push requests??) is another problem...

I would love to move forward if someone can help me check my binaries :)
I know some C, but it is very, very rusty.

Thanks for all the encouragement, garry23; I really appreciate it!

12
Interesting discussion! I have successfully compiled ML once
following this thread:

http://www.magiclantern.fm/forum/index.php?topic=6783.0

But that was as far as I went... sadly...

Just wondering, what is the advantage of cloud compiling versus the "old" way?
Sorry, I am out to lunch...   please enlighten me so that I can choose a better path
for learning how to tweak some ML code. Thanks!

13
Hi Mathew,

Thanks for the video! This is amazing work!
Since this is an old thread, I am just wondering whether this video is still relevant
as far as compiling ML for the 5D2 is concerned?  I would love to do some minor tweaks too :)

I actually did follow dlrpgmsvc's written instructions some time ago and got it all to compile!
Thanks dlrpgmsvc for the great work too!
The only problem was that I wasn't confident the resulting binaries were OK...   the size looked quite different
from the nightly download, so I was stuck there... and was afraid to test the binaries in my camera.

Is there any way to sanity-check the binaries so that they won't brick an expensive camera?   Thanks!


14
"There are several ways to uninstall Magic Lantern on the 5D Mark II, but none of them worked for me. None of the solutions on YouTube worked. The green text did not show up, and it was not possible to use the menu on the camera to update or format the card so ML could disappear. The solution was to install ML again to the CF card: I created a new folder on the PC. There I put all the files from ML -> "Download Magic Lantern -> Stable Release: v2.3". Then I transferred the ML files to the CF card that was in the PC. When the PC asks whether any of the files on the CF card shall be replaced (since they are already there), answer yes to all those questions. Insert the CF back in the camera and turn it on. Then the green text will show up and ML can be uninstalled, as shown in many YouTube uninstall guides."

This is so true. I am on an ML nightly build on a 5D2, and when I click the Canon firmware update button, nothing happens.
So I just had to format a new card, load ML v2.3 onto it, and then click the Canon firmware update button. This time I was
able to see the ML screens and reset the boot flag.  Thank you for the tip, Geir!

15
Thinking about it more, live view WB really doesn't work when flash/gelled flash is used.

So RGB % spot metering on a skin area becomes very important when white-balancing skin tone with flash.

In such a case, ideally, one would take a shot, look at it in playback, and tweak the two dials (B-A) (G-M) to get
a good skin-tone % RGB number on the face.  Alternatively, one would take a shot of a grey card with flash and WB on the grey card.
This should bring the image closer to the final result.
With the image in playback, one can tweak the two dials to come up with the most pleasing skin tone.

Either way will take more effort to implement in ML, I guess.

For now, it would be very useful to just have the RGB % spot meter; skin tone can then be tweaked the old-fashioned way.

Thanks!

17
Wow, I have discovered a gem in ML for setting WB & WB shift in live view!  It is there now!
This is very close to what I want :)

For the 5D2:
1) Go into live view with the ML spot meter activated.
2) Press the WB button on top of the camera.
3) Now use the thumb dial to select K.
4) Point the centre of the LCD at a grey object (you may use x5 or x10 zoom to select
a small grey object!!).
5) Press the SET button.
6) It automatically sets K & WB shift based on what is inside the box in the centre!!
7) You can double-check the WB setting by reading the RGB spot meter to see if R=G=B (though not in % units).

Wow! Kudos to whoever implemented this hidden feature in ML!!  It is there today; my ML version is a nightly build from around the end of March 2014.
I can see this is very useful if you don't have a grey card handy, or if you can't go up on stage to set a custom WB!
This feature is already in ML, and it works with the spot meter too!

So all I need is the ability to manually fine-tune the WB shift in live view using the two dials, the same way as adjusting K in live view.
Of course, a % RGB readout would be essential for checking the skin tone.


Would this be possible to do in ML?   Thank you!
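The gray check in step 7 can be sketched in a few lines of C. This is purely an illustration, not actual ML source; the function name and tolerance are my own assumptions.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch only (not ML code): WB is roughly neutral when the spot meter
 * reads R = G = B within a small tolerance. Name and tolerance are
 * illustrative assumptions. */
static bool is_neutral(int r, int g, int b, int tol)
{
    return abs(r - g) <= tol && abs(g - b) <= tol && abs(r - b) <= tol;
}
```

For example, a reading of 128/130/127 would count as neutral with a tolerance of 4, while CDBFB1 (205/191/177) would not.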

18
"I would argue that tweaking WB in camera is much more time consuming than doing it in post."

It may be so using a bare Canon SLR, but definitely not so if ML provides the tool to do it. :)
I agree that if things are happening randomly and you need to capture whatever pops up in front of you,
you just shoot and forget about setting WB. However, for events and model shoots, you can afford
about 30 seconds to set the correct WB and forget about that half hour of WB tweaking in post. I think that
is a good investment.  Well, if you count the Lightroom picture loading/exporting time and 1:1 rendering time, it saves much more time
than you think.

For my shoots, I can usually post my pics the same day as the shoot, because my JPEGs are correct straight
out of the camera and I don't need to go into a raw converter for 95% of my shots.

Have you tried WB-shift bracketing in camera? You take one pic, and only one pic, and the camera gives you
three JPEGs with different WB to choose from.  At the same time, I know ML can display a "tweaked exposure" during LCD
playback using the thumb dial.

By combining the above ideas,
what would be handy is for ML, during playback mode,
to give us two dials to tweak WB exactly the same way as the 2D WB shift, with the RGB spot meter (in %)
updating as the dials turn (one dial for the blue-amber shift and another for the green-magenta shift).

This way, skin-tone WB can be set quickly (you know all subsequent shots will have good skin tone):

1) Take one pic of the person under that particular lighting with your best guesstimate of WB/K setting and exposure.
2) Play back the portrait and activate the ML live 2D WB shift.  (I think K can be adjusted in live view too; not sure about WB shift.)
3) Point the RGB % spot meter at the face.
4) Rotate the thumb dial and main dial to tweak WB until you see a pleasing skin tone, or until the RGB % spot meter gives you a good value.
5) Press a button to register that 2D WB shift in the camera.
6) Done!

Note it is not always possible to use a grey card, nor would a grey card always give you the most pleasing skin tone.

19
An RGB % spot meter can be used to obtain a rough starting point for WB/WB shift for a pleasing skin tone in camera!

The current RGB meter gives a non-intuitive hex number, e.g. CDBFB1.
It would be much more intuitive to have the % values displayed like Lightroom, e.g. RGB: 75% 65% 55% (a good start for a Caucasian skin tone); see the rule of thumb quoted below.

I know that by shooting raw, one can tweak WB for skin tone in post, but that is time consuming.
The same goes for exposure: I don't use auto exposure because it will over- or under-expose skin depending on the background tone.
The best photographers are able to get everything right in camera!  With ML, it is possible.

I have tried using the vectorscope to tune in skin tone, but I never found the 135-degree cluster reliable. Perhaps I don't know how to use it correctly?

Please see this article for reference on applying % RGB values to correct skin tone in Lightroom:

https://forums.adobe.com/thread/830740?tstart=0

Quotes:
Q: "That depends extremely heavily on the particular person's skin.   Use a good white balance approach instead."
 
A: "Definitely agree, but there are some very basic estimates that can help get within the range. Then use your eye & a well balanced monitor to tweak and perfect.
 
Here is one basic rule of thumb for average skinned caucasian: 20 points difference between R & B with G as close to the middle as possible. Example 75% R, 65% G, 55% B. For darker skin, numbers will be higher, for more yellow in skin, decrease blue value, etc."
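For concreteness, the hex-to-percent conversion being requested here is one line per channel. A minimal sketch in C, assuming the readout is 8-bit channels packed as RRGGBB (the function name is mine, not ML's):

```c
/* Sketch only (not ML code): convert a packed RRGGBB spot-meter readout
 * (e.g. 0xCDBFB1) into Lightroom-style percentages, assuming 8-bit channels. */
static void rgb_hex_to_percent(unsigned int hex, double *r, double *g, double *b)
{
    *r = ((hex >> 16) & 0xFF) * 100.0 / 255.0;
    *g = ((hex >>  8) & 0xFF) * 100.0 / 255.0;
    *b = ( hex        & 0xFF) * 100.0 / 255.0;
}
```

CDBFB1 works out to roughly 80% R, 75% G, 69% B, which keeps the R-to-B spread the quoted rule of thumb talks about.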

20
Thanks for your reply once again, Audionut.
Yes, I know percentage has been implemented, but it is for luminance only,
not for RGB. We need three separate percentage numbers.  :)
Thanks :)

21
Feature Requests / Quickie: % readout for RGB spot meter
« on: October 20, 2014, 06:39:59 AM »
Hello ML team,

Is it simple to add a % readout option to the RGB spot meter?
Right now it is hex, e.g. CDBFB1.
It would be nice to be able to display %R %G %B, e.g. 80%, 76%, 70%.
This way one can tweak skin tone more easily in camera, similar to using the Lightroom % display.
Thanks!

22
Thanks Walter, but I have already added battery packs; that seems a brute-force way to make it work.
Many times the recycling is still not fast enough! We are talking about the 5D2's burst mode at roughly 0.25 s intervals here!
There is actually no need to use burst mode with AI Servo in a wedding processional.
The only reason burst mode is used is that it allows "focus priority", but that is not available for the first shot!
This is a dumb design by Canon, IMHO, which was fixed in the 5D3.

Is it possible to have AI Servo always work in focus-priority mode on the 5D2? Thanks!

23
Feature Requests / Focus priority on 5D2 during burst mode with ai servo?
« on: October 13, 2014, 04:33:07 PM »
Dear Magic Lantern Team:

Happy Thanksgiving to those in Canada & happy Columbus Day to those in the US!
On the 5D2, is it possible to force "focus priority" for the first shot taken in burst mode with AI Servo?
That would be a great help, as the 5D2 defaults to "release priority" for the first picture taken in burst mode during AI Servo.
Focus priority is only enabled for the 2nd, 3rd, 4th, etc. shots in burst mode under AI Servo (assuming only the centre focus point is used).

When bounced flash is used (meaning high flash power), many times the first pic in the sequence gets enough light yet is out of focus.
The second shot in the sequence gets the focus correct but doesn't get enough flash light, due to the relatively long recycle time of the flash.

Thanks & any workaround/pointers will be appreciated!


24
Feature Requests / Re: Set custom WB based on skin tone
« on: October 07, 2014, 01:35:32 AM »
"Skin tones are so varied, I doubt something can be done in camera.  And in camera is only useful when shooting JPG."

Yes, skin tones are varied, but what A1ex did with a raw converter to WB on skin tone proved interesting and practical.

It looks like the custom WB algorithm is already in the Canon firmware. Instead of making a certain tone
look grey (i.e. R=G=B, as in the case of custom WB), is it possible to make R = B * 1.5 and G = B * 1.15, as
in the case of a Caucasian skin tone?
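The proposed ratios can be written down as a tiny helper: given a measured blue level, compute the R and G levels such a skin-tone WB would aim for. A sketch under this post's own assumptions (R = 1.5 B, G = 1.15 B; the function name is illustrative, not ML's):

```c
/* Sketch only (not ML code): target R and G levels for a Caucasian skin
 * tone, derived from a measured B level using the ratios proposed above. */
static void skin_targets(double b, double *r, double *g)
{
    *r = 1.5  * b;   /* R = B * 1.5  */
    *g = 1.15 * b;   /* G = B * 1.15 */
}
```

So for a metered B of 100, the algorithm would aim for roughly R = 150 and G = 115 instead of the R = G = B of a normal custom WB.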

It is not only useful for JPEG shooters; it is useful for raw shooters too, since even raw portrait shooters need to
spend a lot of time tweaking skin tone to make it look pleasant.   WB set correctly to produce a nice skin tone,
even if just for the bride at a wedding, is a huge time saver in post for sure.  :)

I hope we can get more opinions from other experienced portrait shooters too.

Thanks Audionut for your consideration!



25
Feature Requests / Set custom WB based on skin tone
« on: October 06, 2014, 07:12:10 PM »
In portrait shooting, the correct WB from a grey card is often not the most pleasing WB.
The best WB for portraits is the WB that produces the most pleasing skin tone!

It would be nice to WB based on a skin tone (note: not WB in the traditional sense of balancing on skin, which makes the skin grey).
When you "WB" on a skin tone, you want that skin to be recorded in a tone the photographer wants.


User Interface:

1) Allow a few presets to store pleasing skin-tone values (input by the user),
e.g. EABD9D.

(skin tone rgb value reference here:
http://photography-on-the.net/forum/showthread.php?t=591549    )

2) In the RGB spot meter view, make sure the spot meter is pointing at the skin area you want to
balance, e.g. the forehead of a person (use pixel averaging if needed).

3) Press a button to "WB" this skin using the selected skin tone value EABD9D.

4) Done! The camera's WB should now be set such that the next shot records the same person's forehead
at a value of EABD9D!   I guess it doesn't have to be
exactly EABD9D; just keep the R:G:B ratio the same when setting the custom WB.
This will save a great deal of time adjusting skin tone in post!
I believe this ML feature would be a game changer for portrait shooters.
Is this possible? Thanks, ML team!
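A minimal sketch of the math behind step 4, under my own assumption of linear per-channel gains (this is an illustration, not ML code): compute the multipliers that map the metered skin patch onto the target tone, then normalize them so only the R:G:B ratio matters.

```c
typedef struct { double r, g, b; } wb_gains_t;

/* Sketch only (not ML code): per-channel gains that would map a metered
 * skin patch (mr, mg, mb) onto a target tone (tr, tg, tb), e.g. EABD9D.
 * Normalized so the green gain is 1.0, since for WB purposes only the
 * R:G:B ratio matters, not overall brightness. */
static wb_gains_t skin_wb_gains(double mr, double mg, double mb,
                                double tr, double tg, double tb)
{
    wb_gains_t k = { tr / mr, tg / mg, tb / mb };
    k.r /= k.g;
    k.b /= k.g;
    k.g = 1.0;
    return k;
}
```

For example, if the forehead meters at 180/170/165 and the target is EABD9D (234/189/157), the resulting gains boost red and cut blue relative to green, and the gained patch keeps exactly the target's R:G:B ratio.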

