Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - BBA

#1
Still there !
Thanks for this great follow-up: I would not and could not let you down, and will not (you are all too good)!
Btw, I am still working on 5D Mark III LV focus (just for fun)... so I would like newer AF but, never mind, I will follow the devs whatever is decided.
Quote
A one-time boost of $5000 or even $10000 is not going to substitute that, sorry.
So wise ! Totally agree.
#2
Reverse Engineering / Re: MPU communication
August 04, 2017, 12:21:33 PM
I am interested in your work and have some questions about your findings (if possible).

JP79dsfr has shared his work on the EF communication protocol between lens and body: see http://www.magiclantern.fm/forum/index.php?topic=18308.msg184575#msg184575 .

It is amazing work that only a passionate specialist could do: Merci beaucoup, JP, pour ce travail incroyable / Many thanks, JP, for this amazing work.

I think the messages you are referring to are the same ones as those referenced in his work.

French is my mother tongue: I have difficulties in English (sometimes Google Translate is necessary).

If you have difficulties with French, I can do my best to translate some parts of his work into English, as it is quite impressive (PDF: 95 pages).

I think it is best to put the translation next to the original text and keep both in parallel.
#3
Scripting Corner / Re: Lua Scripting (lua.mo)
July 23, 2017, 03:30:07 PM
@garry23

Never mind, thank you for your post: you are welcome!
I agree with you.
Honestly, I am impressed by the results you can achieve with (y)our inefficient workflow... like with the « focus.lua » script.
I have tried the focus bar at home (not the last version though): it is impressive to see the sharpness that can be achieved when « correctly » focusing. Thanks very much for that!

One thing: when « drawing » man-machine interfaces, it is important to see the end result. Interactive debugging is a necessity, because it may be necessary to restart from scratch at any time to better display/interact with things.
A little story: I once wrote thousands of lines of Fortran 77+ for a « plotting graphics package ». To find the errors, I could only print the values of the variables on a listing (between compiling and linking sessions). It is like working blind, but somehow I liked it: it was a bit like a detective story. I was younger and there was no other way to do it. In the end, I could still find an error a day (plotter pens are very precise). I think it makes you careful about avoiding errors at writing time (once you know the language, which itself takes trial and error).

There are more effective development tools now: with the internet, everything is exploding (in French: explosion combinatoire). It becomes important to be able to follow a correct path through that jungle and avoid the foreseeable dead ends.
I have managed to partially emulate the display module with wxLua in the ZeroBrane Studio environment, though I only know a very small part of the wxWidgets capabilities. As it is, it is dirty, and wx would need to be studied further because there are lots of useful features there.
That's why I decided to post here: I think it is a « correct » direction to go, but I have no experience at all at a higher level.
I don't want to make a perfect/complete emulation (always a work in progress), only the necessary functions we use.
Maybe I am wrong, but the QEMU environment has been said to be usable for testing Lua scripts.

I use ZeroBrane Studio because it is the Lua environment I have been able to push the furthest, easily, for ML Lua scripting:
- I can test snippets of code and debug them: it is useful for learning Lua, which needs to be learnt carefully and deeply;
- it has a « use as scratchpad » project mode where you can « live » test small scripts, changing values and options in real time;
- the code can be statically analyzed (up to a point: first use of undeclared variables, unused declared variables, unused function parameters, ...);
- I can use wxLua with it to draw: I hope to be able to use event-driven code to emulate part of the key module and, maybe, a dirty lens module; I am less interested in menu bars, radio buttons, ...

@chris_overseas

I had not seen your post before.
Thank you; I will take a look.
#4
Scripting Corner / Re: Lua Scripting (lua.mo)
July 21, 2017, 11:58:57 PM
I need your advice to help me choose a good environment for debugging Lua scripts before testing and using them in camera.

One way or another, the question has already been asked, but I would like to add some requirements (if possible).

The on-camera Lua provides module libraries to help Lua scripting developers (thanks very much for that).
I have tried to emulate some of those modules (sometimes in a dirty way, I am afraid, but it still provides some help).
For instance, the menu emulation is not a priority...

I would really like to emulate a minimal set of the display module functions (to draw lines, rectangles, text) and debug the scripts on a desktop computer (I am on a Mac).
It would really help a lot.
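To give an idea of what I mean by emulation: instead of drawing, a desktop stand-in can simply record every draw call, so a script runs and can be inspected off-camera. This is only a sketch of mine; the function names and signatures below are what I remember of the ML Lua display module and may well differ from the real API (please check the Lua API reference).

```lua
-- Desktop stand-in for the camera's "display" module (a sketch:
-- the real ML signatures may differ; check the Lua API reference).
-- Instead of drawing, each call is recorded so it can be inspected.
local display = { calls = {} }

local function record(name)
  return function(...)
    -- store the call name followed by all its arguments
    table.insert(display.calls, { name = name, ... })
  end
end

display.line  = record("line")   -- e.g. display.line(x1, y1, x2, y2, color)
display.rect  = record("rect")   -- e.g. display.rect(x, y, w, h, stroke, fill)
display.print = record("print")  -- e.g. display.print(text, x, y)
display.clear = function() display.calls = {} end

-- quick check on the desktop:
display.rect(10, 10, 100, 50, 1, 0)
display.print("2.8", 20, 20)
print(#display.calls)  -- 2
```

The recorded calls can later be replayed through wxLua to actually see the result, while the script itself stays unchanged.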

I have tried to use the "turtle.lua" module in the ZeroBrane Studio environment, which in turn makes use of a wxWidgets module (require "wx").
This environment seems interesting, as some static checks can be made on the code (I am a newbie).
The problem is that pointers are difficult to follow, as only the "reference" address of the user data is displayed while debugging, and those references can change over time.

What do you think ?
Should I continue in that direction or do you know a better environment to use ?
Is there something better I could use in "Visual Studio Code"?

Many thanks in advance for your help !


#5
Sigma firmware updates can be downloaded for their lenses on Canon EF mount... I don't know which processor they use... What do you think?
#6
User Introduction / Re: Hello from Makedonia
June 30, 2017, 03:52:56 PM
You are welcome.
As Walter Schulz says, the "field" is huge, and with the current forum you don't start the maze from its easiest entry point.
Don't be discouraged !
 
May I ask you a question:
Do you think it would be "feasible" to get two "truly parallel" beams to simulate rays coming from a source placed at an infinite distance?
I remember doing that on an optical bench in the past, with a converging lens whose focal length was well known, but things must have changed since then... parallel lasers?

If the parallel beams are placed at a correct distance and do not diverge too much (over a length of a few centimeters to a few tens of centimeters), this could help in recalibrating at will the image focal distance of a given lens.
What do you think ?
Thanks for any help.
#7
Thank you a1ex for your very interesting answer.

I'll leave it for now as I have other priorities for the moment, but as it seems appealing to me, I'll dig into it later.
#8
@a1ex

I am trying to:
- build statistics on the distances, in sub-steps, between transitions for a given lens and, if possible,
- learn how LiveView AF works.

Would it be interesting/possible to "trap" (*) some focus moves in the usual photo LiveView mode (**) to get information, save it in a file and use it later to build/check statistics?
To be useful, the moves should be low level, so as to be as "clean" as possible: a focus move from A to B without sub-moves inside (to avoid mechanical backlash, for instance).
To get information on how AF works, the low-level trap can be embedded in a start-AF/end-AF wrapper.
Everyday shooting in LV could then be used to somehow "learn" the lens characteristics.
What do you think ?
If too difficult or inappropriate, don't bother!



(*) each time a given event is detected, an "interrupt" is made to execute some home-made code, and then control returns.
(**) I don't see many such moves except AF moves in LiveView, which makes it difficult; follow-focus moves seem easier but give no information about AF.
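To illustrate the statistics part (on the desktop, not on camera): once positions have been logged during a single clean A-to-B move, the distances in sub-steps between successive transitions could be accumulated like this. A sketch of mine; the camera-side logging (the trap itself) is not shown, and the readings below are invented.

```lua
-- Sketch: accumulate, per lens, the distances (in focus-motor sub-steps)
-- between successive encoder transitions observed during one clean move.
-- On camera the positions would come from logging the lens position;
-- here they are just a plain Lua table of made-up readings.

local function transition_stats(positions)
  local deltas = {}
  for i = 2, #positions do
    -- record the sub-step distance at each transition (skip repeats)
    if positions[i] ~= positions[i - 1] then
      table.insert(deltas, math.abs(positions[i] - positions[i - 1]))
    end
  end
  -- basic statistics: count, min, max, mean
  local sum, min, max = 0, math.huge, -math.huge
  for _, d in ipairs(deltas) do
    sum = sum + d
    if d < min then min = d end
    if d > max then max = d end
  end
  return { n = #deltas, min = min, max = max, mean = sum / #deltas }
end

-- Example with invented encoder readings:
local s = transition_stats({ 0, 12, 12, 25, 40 })
print(s.n, s.min, s.max)  -- 3   12   15
```

Comparing such statistics for moves in both directions could already show the backlash.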
#9
The idea is to dynamically control the focus distance (hold the point) when the focal length changes (zooming in or out).
It should be a real-time operation, as such lenses are used when filming (and not 2 seconds later, after pumping).
Right ?

Just to help: I am not a specialist, so I can be wrong...

IMHO, sorry, it is not possible, for several reasons:

- precision of the zoom position returned to the camera: this relies on an encoder with a (very) limited number of states. It is only useful for EXIF reporting, easier optical aberration corrections (and other things I don't know about), but it is not precise enough for "real time", where the slightest move should be detected for the correction loop to be effective (the lenses are not made for that);

This reason is enough on its own, but FYI there are more:

- precision of the focus position: here too there is another (very) limited absolute encoder (and a more precise but relative position sensor). The big problem is that you cannot (at least for the moment, and don't expect too much, if it is possible at all) move the focus motor to a given absolute position in between two of the few encoder steps, to achieve a given focus distance. Here too, it is only useful for EXIF reporting and easier corrections.
The lenses require the bodies, and the system is designed with autofocus purposes in mind (be it phase correlation, max contrast, or dual-pixel AF). On new bodies, dual-pixel AF makes use of the AF motor, but it is always controlled by the AF setpoint (in French, "consigne autofocus"). I can only think of the predictive AF system moving the AF motor without direct AF control at each step, but this has not been studied (for rather obvious reasons).

- the lens would have to be calibrated (which has to be known at design time to make it possible and repeatable): cheap lenses have mechanical backlash (or "play"; in French, "il y a du jeu");

- there are optical problems (the focal length changing when focusing: "focus breathing") which need to be known at lens design time;

- in parfocal lenses, if there is a need for compensation, the "brain" is located inside the lens: communicating with the body would be too slow;

- the focus motor itself: cheap lenses have small DC motors with many gears (prone to mechanical backlash); L USM lenses have a ring piezoelectric motor (less backlash, IMHO); there are now lenses with linear motors inside, sometimes even two of them (cf. the new Sony G lenses), which should give higher precision.

#10
@A1ex
Yes, for the ML community, it goes without saying: I fully support open source. I am glad if I can help!
But I am not happy with the script as it is: it needs OO redesigning, it is slow (many lines), ...
#11
Look at my naïve prototype for a 7-segment display emulation.

https://www.dropbox.com/s/gvv57it6f9ru6u5/SEVEN_SEGMENTS.lua?dl=0


If you have ZeroBrane Studio installed, you can run it (ask me how if you want).

It is prototyping => used to see what appears on the screen: the aspect ratio, the colors, the distance between the characters, ...

Once it is thoroughly tested and looks OK, it can be integrated into the bigger project.
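For the curious, the core of such an emulation can stay camera-independent: a table mapping each digit to its lit segments (using the usual a..g naming), which the drawing layer then turns into line calls. A minimal sketch of the idea (my prototype uses the same principle, less tidily):

```lua
-- Which of the seven segments (a=top, b=top-right, c=bottom-right,
-- d=bottom, e=bottom-left, f=top-left, g=middle) are lit per digit.
local SEGMENTS = {
  [0] = "abcdef",  [1] = "bc",      [2] = "abged",
  [3] = "abgcd",   [4] = "fgbc",    [5] = "afgcd",
  [6] = "afgedc",  [7] = "abc",     [8] = "abcdefg",
  [9] = "abcfgd",
}

-- Returns a set { a = true, ... } of the segments to draw for a digit;
-- the drawing layer (wxLua on the desktop, line drawing on camera)
-- then draws one short stroke per lit segment.
local function lit_segments(digit)
  local set = {}
  for seg in SEGMENTS[digit]:gmatch("%a") do
    set[seg] = true
  end
  return set
end

local eight = lit_segments(8)
print(eight.a, eight.g)  -- true   true
```

Keeping the digit-to-segments table separate from the drawing makes it testable without any screen at all.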

#12
luac can detect syntax errors before execution, which is great.
#13
@garry23

I think we have to move hand in hand with our devs, who know things way better than we do.

I am sure they also want to improve the situation, and they know the best way (for us) to do it. Please trust me: they are able to foresee and weigh the amount of work needed. As things go deeper, a naïve implementation becomes insufficient or a real burden, which can become even worse than what there is now.

@A1ex

Thank you for your help.
I also think QEMU as a web service would be great.
#14
@garry23

a) Thanks for this article, it is worth reading. You are not the only user of the focus bar!

IMHO (please take all of it with a grain of salt: rereading myself, I feel the following "may" come across as if I were "giving lessons". Don't get me wrong: I write it "in general" (who would I be to give lessons to others I don't even know?); you know I have to struggle to give better feedback and make myself understandable :P ).
It would be easier for me to tell you that "everything is fine" or "I like it" or "thumbs up"...

It is worth reading:
a.0) simply because what is learnt at school is right: it is good practice to recall the foundations of one's work: it is good communication. I know you know that, because you are a blogger.
In the development case, it goes further:
a.1) with software (a sort of expression/communication), one has to take decisions and go back and forth with the end users to know if everything suits them (I have done that, as an analyst and as an end user). Since you make the scripts for your own needs and really know what you are doing (photography), this process is easier than it usually is;
a.2) this intense communication is useful to end users, who can then smoothly "interface" with the software.

The following is "an example" of something you wrote that is very interesting to me in practice:
Quote
When the far DoF reaches 'infinity', the focus bar switches its info presentation to show the total blur at infinity, in microns, and the near DoF that corresponds to that blur criterion. This is necessary as far DoF distance now has no meaning, ie the far DoF is at infinity.
The meaning of something changes (for a good reason): that is important to know.

That's why I appreciate reading your articles, and why I am thankful for your efforts, which really help me.

b) ease of development

I really hope you have found a graphical development system for Lua scripting, because it is much easier, especially when drawing on a screen.
Personally, I have found some drawing capabilities in ZeroBrane Studio, but I am still far from done: I use the routines used by "turtle.lua", which come bundled with it.
It is then possible to emulate the Lua API routines you use with a "dummy module".
In ZeroBrane you can run a chunk of code "as a scratchpad" under the "Project" menu: the same script is rerun as soon as you make any modification (like changing constants: color values, ...), and you rework it until it suits you, which can be much faster; I can show you if you are interested.

I also think it could be better to have something like "lua inspect" to check the code and find as many errors as possible before porting to the camera.

Maybe "editor.lua" on the EOS M can be useful too, for some last typos/changes.

My 2 cents.
#15
@garry23

I used the Lua experimental build => no more XXX :).

Some ideas/questions/feedback/reminder:
- so many options... for newbies like me;
- saving the config from session to session: I have seen, in the Lua API config module: config:create_from_menu, config.data, config:save(), config:load(). I will try them if I get the time :P

(About the manual stacking)
- appreciate your tutorial on Grayheron  :)
- wondering about the best way to "restart" a stack: just set it off then on in the menu?
- the magenta line is a clever idea for shooting sequentially from infinity to the near field (or in the opposite direction) without adding too much complexity to the display.

Hope it can help...(not complete)
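While I try the config module, session-to-session saving can also be done by hand for a flat table of settings, by writing it out as a loadable Lua chunk. Only a desktop sketch of mine: the file name is invented, and the official way would of course be the config module above.

```lua
-- Sketch: persist a flat table of numbers/strings/booleans between
-- sessions by writing it out as a loadable Lua chunk.
-- (The file name "stack.cfg" is my invention; on camera, the Lua API's
-- config module is the proper mechanism.)

local function save_settings(path, t)
  local f = assert(io.open(path, "w"))
  f:write("return {\n")
  for k, v in pairs(t) do
    -- quote strings; numbers and booleans serialize via tostring
    local val = type(v) == "string" and string.format("%q", v) or tostring(v)
    f:write(string.format("  [%q] = %s,\n", k, val))
  end
  f:write("}\n")
  f:close()
end

local function load_settings(path)
  local chunk = loadfile(path)
  return chunk and chunk() or {}   -- empty table if no saved config yet
end

-- Usage:
save_settings("stack.cfg", { steps = 7, direction = "near_to_far" })
local cfg = load_settings("stack.cfg")
print(cfg.steps, cfg.direction)  -- 7   near_to_far
```

Loading an absent file simply yields an empty table, so defaults can be applied on first run.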

#16
@garry23

I see XXXX on the screen => the needed Lua module is not there.
I will try with the Lua experimental build.

Thanks !
#17
I don't know if this is the right place to report it: it is not the functioning of the script that is problematic, but the functioning of the camera when the script is loaded.

Details (easy to test)

1st configuration (normal behaviour):
Lua.mo disabled.
When I switch on the camera, I get a black screen (no LV) as usual; if I then press the Canon menu button, the menu shows up.

2nd configuration (weird):
Lua.mo enabled.
Focus.lua script set to be loaded.
When I switch the camera on, the screen is black as usual.
If I then press the « menu » button, the Canon menu does not appear; the screen does not remain fully black, as I can see a small variation in screen luminosity. If I press the button again, the screen returns to black as usual; this repeats as I keep pressing the same button many times in a row.
If I press the LV button and enter the ML menu, it seems to unlock things (the number of « states » of the system increases, one might think, and becomes difficult to report).

FYI: firmware 113-ml-Nightly.2 (latest nightly; I saw the same problem with a preceding one but could not narrow things down at the time).
#18
From one of the masters !
I saw someone's comment, "please, let me shadow you", which summarizes our admiration.
#19
@garry23

It's due to the language barrier.

I still don't quite understand your reply: cf., for instance, "approach to life".
As I google-translated part of my post, I thought something could be misleading and misunderstood.
English and French both have their subtleties.

Glad to know it was not the case. So we can go forward.
#20
[removed]
@garry23
I didn't want to hurt anyone, and I do apologize if that was the case.
Too bad.



#21
Wow !
Gorgeous images !

Please tell me more (yes, a lot... more than you think) about the technical how-tos!!!
Would be much appreciated !
Thanks !
#22
Scripting Corner / Re: FOCUS tutorial
May 16, 2017, 03:43:50 PM
@garry23

Those tools rely heavily on the man-machine interface, which is always a difficult part of a project (lots of trial and error, subjective appreciation, ease of use, ...).
But as an experienced photographer, you know what is needed and what you are doing !
Quote
I think the DOF mode is the real star of the script, as it sits just below the ML top bar, ie as unobtrusive as possible, and provides 'all' the info you need about the focus between the DOF
Yes, it is quite effective (info-to-size ratio). On the LiveView screen, I think we have to "draw" things so that we can remove them when they are no longer needed. Hint: I thought of drawing digits the way 7-segment displays do.
Quote
Finally, the obvious caveat is that we 'assume' the lens distance reporting is 'right'
Agreed: it comes from the lens (cf. the communication protocol): the encoder present in the lens, plus a conversion table provided by Canon. The encoder is not very precise (few steps), so there are few distances: it was made for AF lenses/bodies.
Quote
I am thinking about a calibration feature
Me too. Still thinking.
There should be some calibration procedures in professional photography, at least for certain lenses?
Infinity: I thought of two parallel beams (lasers) focusing/interfering on the same focal plane in the image field: the image focal point should be there (a good reference point).
Quote
Maybe this is where some of your working can help?
You are right, but IMHO it is far from finished (sorry).
Maybe the interpolation feature between distances could be used, if calibration shows it can be precise enough.
First, let's render unto Caesar... A1ex has done most of the decisive work.
He showed the micro-units counter and the hysteresis, probably caused by mechanical backlash (at least for lenses with a motor and gears).
A1ex also proposed using a Kalman filter to remove as much as possible of the "errors/noise" from the unmodeled mechanical behavior (which still needs to be modeled up to a point) and from the (few) sensor(s).
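To make the interpolation idea concrete: with a table of calibrated (micro-unit position, reported distance) pairs, the current micro-units counter could be mapped to an estimated distance in between the encoded steps. The pairs below are entirely invented; real ones would come from a calibration run for a given lens.

```lua
-- Sketch: estimate focus distance by interpolating linearly between
-- calibrated (micro-unit position -> reported distance) pairs.
-- The pairs below are invented; real ones would come from a
-- calibration run logging the lens position at each encoded distance.
local calib = {
  { pos = 0,    dist_mm = 450 },
  { pos = 1200, dist_mm = 700 },
  { pos = 2600, dist_mm = 1500 },
}

local function estimate_distance(pos)
  -- clamp below the first calibrated point
  if pos <= calib[1].pos then return calib[1].dist_mm end
  for i = 2, #calib do
    local a, b = calib[i - 1], calib[i]
    if pos <= b.pos then
      -- linear interpolation between the two surrounding pairs
      local t = (pos - a.pos) / (b.pos - a.pos)
      return a.dist_mm + t * (b.dist_mm - a.dist_mm)
    end
  end
  -- clamp above the last calibrated point
  return calib[#calib].dist_mm
end

print(estimate_distance(600))  -- 575.0 (halfway between 450 and 700)
```

Of course, this only makes sense if calibration shows the micro-unit counter is repeatable enough (hysteresis again...).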
To be honest, I am stuck.
A1ex has lots of important things to do for himself and the community (3K/4K lossless :), QEMU :), fast calculations on matrices/images :), ...).
#23
Scripting Corner / Re: FOCUS tutorial
May 16, 2017, 01:03:33 PM
@garry23

This article is well written (as always) !
I think it is indeed quite interesting stuff  :) !

I haven't tested all the functionality so far, nor have I checked the Lua code (it would be better debugged in a Lua development environment, wouldn't it?).

I have tried the script with my 24mm; it is indeed very sensitive near infinity and goes from 63 m to infinity with a tiny angular move.
I wish it could be smoother at that crucial point, but I know there are only a few encoded distances.

It is quite an achievement as it is !
Thanks again !
#24
For those interested, I found this thread on a German site, "Canon-EOS-Protokoll":

http://www.dslr-forum.de/showthread.php?t=649529

As I can't understand German, I will have to "google translate" it.

I also saw that they made a microcontroller-based adapter (to be soldered into the lens) to transcode/translate the newer Canon commands, in order to drive an old Sigma lens.
[added]
http://www.martinmelchior.be/2013/04/conversion-of-old-sigma-lens-to-work.html (in french)
http://www.butterflybikers.cz/index.php/cz/elektronika/item/1-canon-eos-protocol-convertor-for-old-sigma-lens (in english)
#25
@garry23, @a1ex

Very interesting: I like it very much :).
I have tested the preceding version and was astonished by the sharpness.

Quote
(I've also played with cblur - also found on your blog - and noticed their javascript code, but it doesn't look like that code has an open source license...)
What can be copyrighted :
- not the calculations : optics, algebra...
- the interface : the webpage as a whole ?
- something special in the algorithm ?

I have always been learning in video, trying to get sharpness in landscapes (from my (now) ugly tests in 8mm/Hi8, then DV / (skipped HDV) / Full HD)... :o :-[ :-\ :P


IMHO (my 2 cents):
- This is the « specialists' » man-machine interface: I personally like it, as it gives the « behind the scenes » view.
The end user, especially a non-specialist, would be interested in the sharpness of the end result, not in how to get it: so a little « companion script » could ask the right questions (what are your needs... what do you want to do with your photo (high-quality print, ...)) and initialize the right parameters where needed, in the most obvious cases (see the next point).

- I very much like the sharpness of the picture on the front page of Pierre Toscani's website http://www.pierretoscani.com/index.html (there is a contrast between sharpness and blur that also makes the result). Btw, there is a lot of interesting stuff in there (thanks to M. Toscani).

- I would be interested to know where, in micro-units, the lens actually goes: i.e., try to use the script in the lua fix environment and print lens.focus_pos to a debug file, each time with the offset, to try to compare apples to apples (and avoid mechanical backlash)... for later...

- I wonder if it could be interesting to calculate things like depth of/from defocus: by the way, depth of field is « acceptable defocus ». I would like to experiment with that: are there good papers on it? What are the algorithms to compute the DoD/DfD (Panasonic's field)? Would it confirm the results on far landscapes (or, off topic, far objects like the moon)? If the blur spot is tinier than a given sensor pixel, might it differ across the whole picture? How can we deal with the different object distances present in the same picture? Is it possible to segment the picture according to the sensitivity of the local DoD to focal distance... histogram of DoD... max DoD variation / max contrast...

- I was also interested in displaying the curve (a zoomed part and a global curve or tendency) on the screen, because it is really meaningful: I only have one tiny fraction of a1ex's skills (I don't know how many zeros) ???

Bottom line : Thanks   :) !