Difference between Tragic Lantern and Magic Lantern

Started by mityazabuben, October 19, 2013, 11:13:51 AM

Walter Schulz

You seem to have missed the irony, a1ex ...
If you ask me, I would prefer a consolidation/merge/unification phase beginning ASAP.

Q: How can we (as in "non-developers and not going to change status") support you (as in "devs") to make this happen?

Ciao
Walter

Marsu42

Quote from: a1ex on December 11, 2013, 07:16:50 AM
Just a friendly reminder: it's been almost two months and no sign of improvement.

You're talking about TL, or about ML patches in general?

Quote from: a1ex on December 11, 2013, 07:16:50 AM
1) During the next two months I will not commit anything to ML code base other than accepting patches and maybe tweaking them.

Certainly worth a try; every idea that generates more user patches is a good one. I admit it's very easy to lean back and watch you do it - you're just too good at it and know everything about the code - but that is bound to produce the current situation, with you and g3gg0 doing all the work.

Quote from: a1ex on December 11, 2013, 07:16:50 AM
2) I will move any ML ports that have not been maintained in the last few months into a "unmaintained" directory (that is, no more nightly builds for them). Anybody is free to jump in and maintain these ports. When the situation improves, I'll move them back.

+1, worth a try; it certainly makes sense for 6d/... included in TL. As I stated in the 70d thread, unmaintained builds will damage the public ML image, and anyone who is able to can still compile them instead of using pre-compiled nightlies, if I understand your idea correctly.

Quote from: a1ex on December 11, 2013, 07:16:50 AM
3) I will create a separate forum section for third party modifications (TL, a.d.'s 5D2 builds, g3gg0's experiments, Marsu42's tweaks and so on). Basically, anything that lives on its own and is not (yet) maintained by the ML team. Eventually, all the good stuff from these should get merged into mainline, and until then, they should have a chance of getting tested by the adventurous users without competing for attention with the main ML nightly builds.

What I think would be important and what I already suggested in another thread is to move new modules (including some small core additions if required for them to access model-specific consts/values) or even features to the nightly builds much sooner, but label them as "staging" or "experimental" just like with Linux... then add an option to the module loader or ML options to (dis)allow these non-tested modules or parts, as this might improve the "features vs. stability" situation.

As for my part in the future: I've been using my new 6d for the last few months and only hacking ML as strictly necessary to make it work on the 6d, and I will be rather busy for at least the next half year - so I'm afraid I don't see myself bringing my currently personal-only "link ml/canon settings to dial" and "hotkeys" modules to a public stage, as coding takes me much longer than an experienced dev :-\

Apart from the occasional comment on Bitbucket or here, I should manage to update the auto_iso module repo; I've been using the new metering code for the last few months (changed from a1ex's Av/Tv, and new with M+EC) and I can say it works fine... and I keep seeing requests on Canon Rumors from people wondering why their camera refuses to do auto ISO with flash on.

Edit: Thinking about it, the most important thing seasoned devs can do is the backend stuff (module system, menu system (submenus! :-)) and code advice and review ... leaving more of the feature-craziness and model maintenance to newer users/contributors, because I admit something is easily requested, but if you really want it you will probably sit down and code it yourself :-o

a1ex

Quote from: Marsu42 on December 11, 2013, 09:04:49 AM
You're talking about TL, or about ML patches in general?
About TL. Of course, some of my decisions will also touch the other unmaintained ports (I'm not trying to point fingers at TL or anyone else; I'd like to fix all these things in the same way).

Quote
What I think would be important and what I already suggested in another thread is to move new modules (including some small core additions if required for them to access model-specific consts/values) or even features to the nightly builds much sooner, but label them as "staging" or "experimental" just like with Linux...
I'll try to address this and combine it with my "feedback matrix" idea. Hopefully it will turn out well.

Quote
Edit: Thinking about it, the most important thing seasoned devs can do is the backend stuff (module system, menu system (submenus! :-)) and code advice and review ...
Indeed, and move towards a clean and documented API. I have a lot of ideas here, but I need to sit down and get a clear picture first.

Quote from: Walter Schulz on December 11, 2013, 08:59:44 AM
You seem to have missed the irony, a1ex ...
I'm not a native English speaker, so I'm not sure what I missed.

Quote
Q: How can we (as in "non-developers and not going to change status") support you (as in "devs") to make this happen?
Not sure, maybe by encouraging it and not letting ML rot for months? I'm not going to solve this alone, and if things don't get better, I'll extend my inactivity period until people realize this should be a community effort and not 1-2 people doing most of the work.

gary2013

Quote from: a1ex on December 11, 2013, 08:32:06 AM
In other words: screw the project health, I just want features!
Everyone knows what I meant from the smiley face. No English required. Of course I meant it as a joke.  ::)

ilguercio

The thing is, a1ex, users like me can just test features and give you feedback, contribute with donations, and not much more. I am too uneducated to start coding now, and when I tried it was always because you explained to me what to do (and it was never anything complicated, just basic things).
I have been a lurker for months now; I moved country and I am having a hard time even taking a couple of decent pictures. Still, I like the project and I have been a supporter of it since the first 50D port started to take shape.
So, again, as I said and as you know, you developers can agree on what things to do and how to do them, but what exactly is it that you expect from users?
Most users haven't understood the principle behind ML, and they hardly will in the future.
I and others are surely willing to hear any suggestion you can give us, provided it is something we can actually do.
Canon EOS 6D, 60D, 50D.
Sigma 70-200 EX OS HSM, Sigma 70-200 Apo EX HSM, Samyang 14 2.8, Samyang 35 1.4, Samyang 85 1.4.
Proud supporter of Magic Lantern.

Audionut

Quote from: a1ex on December 11, 2013, 07:16:50 AM
3) I will create a separate forum section for third party modifications (TL, a.d.'s 5D2 builds, g3gg0's experiments, Marsu42's tweaks and so on). Basically, anything that lives on its own and is not (yet) maintained by the ML team. Eventually, all the good stuff from these should get merged into mainline, and until then, they should have a chance of getting tested by the adventurous users without competing for attention with the main ML nightly builds.


This is a great idea. It will help keep discussion specific to these features separate.

a1ex

2 and 3 are mostly done. I'll leave the rest of the cleanup to the moderators.

dmilligan

Quote from: Walter Schulz on December 11, 2013, 08:59:44 AM
Q: How can we (as in "non-developers and not going to change status") support you (as in "devs") to make this happen?

This is not directed at you, Walter, but at the community in general (you are one of the most helpful folks on this forum, keep up the good work ;) ):

Test, as in really test. As if you were QC working for a software company. Don't just use ML in normal situations and submit problems when things go wrong (and don't just do another ML RAW vs. H264 test, there are more than plenty of those out there). Try to break a feature. Think of as many possible scenarios as you can. Throw everything you can think of at a feature to break it. Try every value of every setting. Try it in extremely unusual scenes or lighting. Write down your results like a scientist doing an experiment. Then share your results, even if nothing went wrong. It's also helpful for devs to know when something actually works. Test for the sake of testing, with specific intention, not for the sake of making your 'budget short film'.

If you notice a problem, figure out how to repeat it and try to isolate it to a specific build (or if you can compile, isolate it to a specific changeset). Looking at the change log on bitbucket is very helpful in doing this (look for commits related to the feature and check those builds first). This requires no coding ability at all, and can save devs a lot of time, b/c they will know exactly where in the code the problem lies. Even I can figure out bugs that would normally be way above my skill level if you tell me exactly which changeset is causing it; in fact, most of the very few patches I've submitted I found this way.

This has been reiterated by a1ex time after time, I'm sure he's tired of saying it. It's clear that very few people actually end up reading the 'how to report bugs' links he posts.

A good example of the lack of this being done is the 600D 'overheating' issue. Countless people have reported this issue. Not one has done anything to help resolve it. Simple scientific-like experimentation can easily help pin down the exact source of the problem without any coding skill at all (e.g. monitor the temp of the camera over time, both the displayed temp and with an actual thermometer, try different builds, use the stable build as a control group, etc.). The nightly builds are for testing, but nobody actually seems to be doing this; their idea of 'testing' is: 'use in a production environment and complain when something goes wrong'.

I think the general response to the troll question "when a stable build?" should not be a link to the faq, but something like: "As soon as you provide us with clear and concise testing and bug reports for all ML features"

Quote from: a1ex on December 11, 2013, 09:56:03 AM
Indeed, and move towards a clean and documented API.
+1
My main stumbling block in terms of trying to help out with development is not lack of coding skill, but rather lack of specific knowledge of the ML code base. I have enough spare time to do some basic coding, fix minor bugs, and add little features here and there; what I don't have time to do is review and understand hundreds of thousands of lines of rather convoluted (understandable, it is a hack after all, and an embedded/RT system) and poorly documented (no excuse for this) code. For example, the multilevel submenu thing: I feel confident that I have the coding skill required to implement it myself; the problem is I don't understand menu.c well enough (and I've tried, b/c I'd really like to get it done for my module; I usually just stare at menu.c for 30 mins and then give up). At a glance, I have no idea what functions do, when they're called, etc. Sure, I could probably figure it out eventually if I took the time, but I don't have that kind of time. A few simple comments before each function (even private functions) would be extremely helpful (description, parameters, outputs, prerequisites, where it's called from, which task, etc.), and these comments could be in a standard format for automatic docs generation, e.g. Doxygen.
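
To make that concrete, here is a minimal sketch of the kind of header I mean, written as a Doxygen block. The function name, parameters and behaviour are made up for illustration; this is not the actual menu.c API:

/**
 * @brief Draw a single menu entry on the current display buffer.
 *        (hypothetical example, not real ML code)
 *
 * @param entry     Menu entry to draw; must not be NULL.
 * @param x, y      Top-left position on screen, in pixels.
 * @param selected  Nonzero if the entry is currently highlighted.
 *
 * @return 0 on success, negative on error (e.g. entry hidden).
 *
 * @note Called from the GUI task only; do not call it from property handlers.
 */
static int menu_entry_draw(struct menu_entry * entry, int x, int y, int selected);

Even just this much - description, parameters, return value, and which task it runs from - would save a newcomer hours of staring at the code.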

It sounds to me like, at least to some extent, 1% has done this, and that's what he is referring to when he talks about his "notes". If that is the case, then I actually would welcome 1%'s "notes" into the main repo (perhaps they just need to be cleaned up somewhat?). IMO, the more comments and documentation there is in the code, the better, as long as it's not 'incorrect' information or in 1%'s own language/shorthand. I haven't reviewed the TL fork code b/c I don't use any of those cams, so IDK, maybe I should. If I manage to get my hands on a 6D (which is very tempting now that it's only $1,500), I am more than willing to help merge the TL fork.


I've been trying to clean up my own repo and learn as much as I can about Mercurial. I've learned some good practices and plan on sharing them (in a new thread). Following these tips will, I think, help everyone who maintains a fork manage their own personal tweaks, as well as make it easier to get stuff merged back into the mainline.

a1ex

Quote from: dmilligan on December 11, 2013, 04:16:32 PM
Test, as in really test. As if you were QC working for a software company. Don't just use ML in normal situations and submit problems when things go wrong (and don't just do another ML RAW vs. H264 test, there are more than plenty of those out there). Try to break a feature. Think of as many possible scenarios as you can. Throw everything you can think of at a feature to break it. Try every value of every setting. Try it in extremely unusual scenes or lighting. Write down your results like a scientist doing an experiment. Then share your results, even if nothing went wrong. It's also helpful for devs to know when something actually works. Test for the sake of testing, with specific intention, not for the sake of making your 'budget short film'.

If you notice a problem, figure out how to repeat it and try to isolate it to a specific build (or if you can compile, isolate it to a specific changeset). Looking at the change log on bitbucket is very helpful in doing this (look for commits related to the feature and check those builds first). This requires no coding ability at all, and can save devs a lot of time, b/c they will know exactly where in the code the problem lies. Even I can figure out bugs that would normally be way above my skill level if you tell me exactly which changeset is causing it; in fact, most of the very few patches I've submitted I found this way.

This has been reiterated by a1ex time after time, I'm sure he's tired of saying it. It's clear that very few people actually end up reading the 'how to report bugs' links he posts.

A good example of the lack of this being done is the 600D 'overheating' issue. Countless people have reported this issue. Not one has done anything to help resolve it. Simple scientific-like experimentation can easily help pin down the exact source of the problem without any coding skill at all (e.g. monitor the temp of the camera over time, both the displayed temp and with an actual thermometer, try different builds, use the stable build as a control group, etc.). The nightly builds are for testing, but nobody actually seems to be doing this; their idea of 'testing' is: 'use in a production environment and complain when something goes wrong'.

+1, this should be sticky.

Quote
I think the general response to the troll question "when a stable build?" should not be a link to the faq, but something like: "As soon as you provide us with clear and concise testing and bug reports for all ML features"
Agreed; updated the FAQ.

Audionut

Quote from: dmilligan on December 11, 2013, 04:16:32 PM
My main stumbling block in terms of trying to help out with development is not lack of coding skill, but rather lack of specific knowledge of the ML code base. I have enough spare time to do some basic coding, fix minor bugs, and add little features here and there; what I don't have time to do is review and understand hundreds of thousands of lines of rather convoluted (understandable, it is a hack after all, and an embedded/RT system) and poorly documented (no excuse for this) code.

My main stumbling block is skill, but a lack of documentation doesn't help. 

To be fair, the new code base that I look at comes with excellent descriptions, such as,

Quote
else
    {
        /* image is overexposed */
        /* and we don't know how much to go back in order to fix the overexposure */

        /* from the previous shot, we know where the highlights were, compared to some lower percentiles */
        /* let's assume this didn't change; meter at those percentiles and extrapolate the result */

        int num = 0;
        float sum = 0;
        float min = 100000;
        float max = -100000;
        for (int k = 0; k < COUNT(percentiles)-1; k++)
        {
            if (diff_from_lower_percentiles[k] > 0)
            {
                float lower_ev = raw_to_ev(raw_values[k+1]);
                if (lower_ev < -0.1)
                {
                    /* if the scene didn't change, we should be spot on */
                    /* don't update the correction hints, since we don't know exactly where we are */
                    ev = lower_ev + diff_from_lower_percentiles[k];
                   
                    /* we need to get a stronger correction than with the overexposed metering */
                    /* otherwise, the scene probably changed */
                    if (target - ev < correction)
                    {
                        float corr = target - ev;
                        min = MIN(min, corr);
                        max = MAX(max, corr);
                       
                        /* first estimations are more reliable, weight them a bit more */
                        sum += corr * (COUNT(percentiles) - k);
                        num += (COUNT(percentiles) - k);
                        //~ msleep(500);
                        //~ bmp_printf(FONT_MED, 0, 100+20*k, "overexposure fix: k=%d diff=%d ev=%d corr=%d\n", k, (int)(diff_from_lower_percentiles[k] * 100), (int)(ev * 100), (int)(corr * 100));
                    }
                }
            }
        }

        /* use the average value for correction */
        correction = sum / num;
       
        if (num < 3 || max - correction > 1 || correction - min > 1 || correction > -1)
        {
            /* scene changed? measurements from previous shot not confirmed or vary too much?
             *
             * we'll use a heuristic: for 1% of blown out image, go back 1EV, for 100% go back 13EV */
            float overexposed = raw_hist_get_overexposure_percentage(GRAY_PROJECTION_AVERAGE_RGB | GRAY_PROJECTION_DARK_ONLY) / 100.0;
            //~ bmp_printf(FONT_MED, 0, 80, "overexposure area: %d/100%%\n", (int)(overexposed * 100));
            //~ bmp_printf(FONT_MED, 0, 120, "fail info: (%d %d %d %d) (%d %d %d)", raw_values[0], raw_values[1], raw_values[2], raw_values[3], (int)(diff_from_lower_percentiles[0] * 100), (int)(diff_from_lower_percentiles[1] * 100), (int)(diff_from_lower_percentiles[2] * 100));
            float corr = - log2f(1 + overexposed*overexposed);
           
            /* with dual ISO, the cost of underexposing is not that high, so prefer it to improve convergence */
            if (dual_iso)
                corr *= 3;
           
            correction = MIN(correction, corr);
           
            /* we can't really meter more than 10 EV */
            correction = MAX(correction, -10);
        }

I'm still too stupid to understand how the code works, but with descriptions like these, I'm sure it wouldn't take much strain on the grey matter to work it out.

Marsu42

Quote from: dmilligan on December 11, 2013, 04:16:32 PM
what I don't have time to do is review and understand hundreds of thousands of lines of rather convoluted (understandable, it is a hack after all, and an embedded/RT system) and poorly documented (no excuse for this) code.

There are extremely hackish passages, with large blocks of code commented out for one reason or another - but I don't find the core code I looked at and use (including prop stuff and lens.c, focus.c, gui.c, menu.c, shoot.c, hdr.c, ...) to be that horrible - the main thing that takes time is understanding how ML uses the Canon DryOS functions and props at all.

But I certainly feel that the devs adding core code could be *much* more verbose when commenting what the core code is doing, where, and why. I know that after coding for ML for a while these things seem trivial and self-explanatory, but they aren't for new contributors.

Quote from: dmilligan on December 11, 2013, 04:16:32 PMI think the general response to the troll question "when a stable build?" should not be a link to the faq, but something like: "As soon as you provide us with clear and concise testing and bug reports for all ML features"

Thanks for your good wrap-up and guide for users on improving ML without coding, but as to the "stable" part: in all honesty, ML is currently a moving target, so asking "When will you stop adding features and stabilize what it is right now" is not necessarily trollish. Yes, the devs need precise bug reports, but also yes, there is no timeframe or milestone schedule for the widely used 6d/5d3 at all, so I can understand if people new to ML are wondering.

Edit: I'm far too removed from the dev process, but a Linux-like model with a "merge window" and a bugfix phase sounds reasonable to me; contributors would know roughly when they should have additions ready, and users would have a rough idea of when the next "as stable as it gets" release is to be expected.

dmilligan

Quote from: Audionut on December 11, 2013, 05:57:04 PM
To be fair, the new code base that I look at comes with excellent descriptions, such as,

Agreed, that is a great example of how code should look. One of the reasons I think modules are a lot easier to develop is that there is a lot of well-documented example code like that to go by. If the entire code base were that well documented, I wouldn't be complaining about it ;)

The main issue is that the module system is not yet fully mature, i.e. not all of the things needed by modules have been implemented. So that means if I need to do something not currently possible with a module, I have to dig into the core and implement it myself (or beg a1ex and wait).

Quote from: Marsu42 on December 11, 2013, 06:06:04 PM
"When will you stop adding features and stabilize what it is right now" is not necessarily trollish.
Yes, but that's not what anybody actually asks. I too can understand why ppl would like to know this. I think that if ppl actually did more helpful testing and bug reports, we would end up a lot closer to a stable version; devs would be more likely to, and have more time to, actually do a feature freeze and a stable release. What incentive does a1ex have to do that when nobody even tries a feature for months?

Marsu42

Quote from: dmilligan on December 11, 2013, 06:44:33 PM
The main issue is that the module system is not yet fully mature, i.e. not all of the things needed by modules have been implemented. So that means if I need to do something not currently possible with a module, I have to dig into the core and implement it myself (or beg a1ex and wait).

I don't see the need for a hard division between "module" and "core" devs; if you need some core patches, well, just go ahead, as I know you did in lens.c - I know it's required for model-specific things, and it will most likely stay this way.

Btw, OT: what I currently want for modules (hint, hint, hint :-)) are submenus and array config variables; this has to be done by someone capable who knows the backend.

Quote from: dmilligan on December 11, 2013, 06:44:33 PM
nobody even tries a feature for months

But in the cases I remember being removed, they were rather obscure features, and the reason nobody reported the bug is that they simply weren't used. ML might be "over-featured" in some areas, though I'm against preemptive removal, since you never know who uses what.

a1ex

If ETTR qualifies as "obscure feature", no comment...

Critical Point

I think people interested in porting TL to ML should donate to 1%, and when enough money is raised so that this project is properly funded, 1% will spend the required time to do the coding. You cannot expect the man to spend many, many hours of his free time just so that we can all be happy, when he is not getting paid. That's how this problem should be presented and understood.

1% should have a link for donations to this project alone, and when enough money is raised so that his effort, time and energy spent are properly funded, then it will be done. Raising a few thousand dollars for porting TL is not such a big challenge for this community.
600D & GH2 / PC.

Marsu42

Quote from: Marsu42 on December 11, 2013, 07:10:06 PM
But in the cases I remember being removed
Quote from: a1ex on December 11, 2013, 07:25:16 PM
If ETTR qualifies as "obscure feature", no comment...

I was thinking of some features you recently removed, like "auto burst pic quality"... as for ETTR being broken (was it?) and nobody noticing - this might prove that people do not track every nightly, but only update every so often?


a1ex

Yes, ETTR was removed for a while from 600D (see here and here).

Actually, it's not ETTR that was removed, but all the raw photo overlays. But since ETTR depends on them, the module no longer linked correctly and got disabled too.

darkstarr

Quote from: Critical Point on December 11, 2013, 07:57:51 PM
I think people interested in porting TL to ML should donate to 1%, and when enough money is raised so that this project is properly funded, 1% will spend the required time to do the coding. You cannot expect the man to spend many, many hours of his free time just so that we can all be happy, when he is not getting paid. That's how this problem should be presented and understood.

1% should have a link for donations to this project alone, and when enough money is raised so that his effort, time and energy spent are properly funded, then it will be done. Raising a few thousand dollars for porting TL is not such a big challenge for this community.

So then you would have one dev getting paid for working on ML, and the other devs doing the same job for free.

That won't work.

Marsu42

Quote from: darkstarr on December 11, 2013, 10:15:50 PM
So then you would have one dev getting paid for working on ML, and the other devs doing the same job for free.

It wouldn't work anyway; 1% doesn't have the same level of coding and ML expertise as a1ex (I'm sure 1% would agree). 1% isn't the 6d "maintainer" but is mainly interested in getting a "quick and dirty" working 6d distro, and happens to share his repo with the rest of us. I'm very grateful for that because I couldn't afford a 5d3, but the current TL/ML problems lie elsewhere than people not getting paid.

l_d_allan

Quote from: Marsu42 on December 14, 2013, 04:42:08 PM
I'm very grateful for that because I couldn't afford a 5d3

Dang ... I wish I'd researched the status of ML + 6d + TL before purchase. I'm half considering returning the much more cost-effective 6d (YMMV) and applying the cost to a 5d3.

But I'm near the threshold of WOW (wrath of wife).

For me, the 6d without a reliable, valid, trustworthy ML is almost unusable. (And I have yet to tackle video, which I understand benefits from ML even more than still photography such as mine.)

I'd love for ML to be Rock Solid, mature software, but it ain't. That's not a complaint ... I've come to accept the development speed of "rolling releases".

I am Very Hesitant to "try" TL, and my impression is that a1ex (and 1%  ?) would concur that TL has some (many?)  unsafe coding practices.

So now my 6d is lacking ...

  • auto-tune micro-focus adjustment
  • ETTR
  • Dual-ISO
  • Intervalometer
  • Extended Bulb
  • etc.

BTW: I'm getting ready for a "road trip" to western Kansas for full-moon panos of grain elevators. I'm considering leaving the 6d behind and "making do" with my 5d2. The 6d without a non-bricking ML is just lacking too much. Bummer.

I didn't realize how much I've come to rely on ML. (and THANKS to the devs!!!!!)


l_d_allan

Quote from: Marsu42 on December 11, 2013, 08:43:20 PM
people do not track every nightly, but only update every so often?

That's very much my practice. At first, I was so amazed at what ML provided that I was inhaling everything I could find. "Drinking the Kool-Aid"?

But real life intrudes. I've got commitments to get pictures taken and post-processed and printed. Oh ... that?

And I'm a non-professional, retired hobbyist.

On average, I probably download and install ML every other week, or so. Then grapple with What's New and What's Changed.

"May you live in interesting times" ... to be a Canon photographer.


Marsu42

Quote from: l_d_allan on December 14, 2013, 06:37:38 PM
I didn't realize how much I've come to rely on ML. (and THANKS to the devs!!!!!)

Same with me; that's why, in spite of a1ex's warning, I added TL to my 6d after trying to shoot for one day with the crippled auto ISO (only ISO 400 with flash) and without the other usability enhancements of ML.

Many people are using TL, and nobody seems to have bricked a camera yet, as 1% always points out ... the problem is rather that the more the peer-reviewed ML codebase and 1%'s TL code diverge, the more likely it is to break sooner or later as the code degenerates... but it's not at that stage yet.

l_d_allan

Quote from: Marsu42 on December 11, 2013, 06:06:04 PM
But I certainly feel that the devs adding core code could be *much* more verbose when commenting what the core code is doing, where, and why,

Not sure I completely agree with you. Comments can get "stale" very quickly, especially with very dynamic code like ML. Bad comments can be worse than no comments.

In my experience, some keys to maintainable code with several people involved are:

  • Meaningful names for variables, parameters, functions, subroutines, modules, etc. I started programming with Fortran-66 in 1968 with 7 or 8 letter names. Those days are loooooong gone.
  • Little or no "cascaded logic", which hurts debugging ... e.g. if (isOk(true, 42, isLate(42, isObfuscated(true, 42)))) { ... } (see the short sketch after this list)
  • Code so that the debugger is the most helpful ... simple statement per line so you can see as much as possible about what's happening (but I had a tendency to be over dependent on debuggers ... back in the day)
  • Let the compiler do the optimizing
  • KISFTSSM ... Keep it simple for the simple/stupid maintainers
  • D Knuth: premature optimization is the root of much software evil
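
To illustrate what I mean by the "cascaded logic" and "simple statement per line" points, here is a tiny sketch; the helper functions are made up purely so the example compiles, they mean nothing:

#include <stdbool.h>
#include <stdio.h>

/* hypothetical helpers, only here to make the example self-contained */
static int  isObfuscated(bool flag, int answer)     { return flag ? answer : 0; }
static int  isLate(int answer, int code)            { return answer + code; }
static bool isOk(bool flag, int answer, int code)   { return flag && code > answer; }

int main(void)
{
    /* cascaded logic: one line, intermediate values invisible in a debugger */
    if (isOk(true, 42, isLate(42, isObfuscated(true, 42))))
        printf("ok (cascaded)\n");

    /* one simple statement per line: every intermediate result can be inspected */
    int obfuscation = isObfuscated(true, 42);
    int lateness    = isLate(42, obfuscation);
    bool ok         = isOk(true, 42, lateness);
    if (ok)
        printf("ok (step by step)\n");

    return 0;
}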

Note that I have never glanced at ML code. I suspect it is far above average wrt code quality, but that may be "wishful thinking" on my part.


dmilligan

There's no excuse for not having a brief description, including parameters and output, of every function.

Quote from: l_d_allan on December 14, 2013, 09:56:03 PM
Not sure I completely agree with you. Comments can get "stale" very quickly, especially with very dynamic code like ML. Bad comments can be worse than no comments.
That is a completely invalid point IMO. The potential for bad documentation is no reason not to document at all. Large swaths of the core code change very little anyway. Some particular functions whose purpose I still wonder about were written years ago and haven't changed since.

Quote from: l_d_allan on December 14, 2013, 09:56:03 PM
Meaningful names for variables, parameters, functions, subroutines, modules, etc. I started programming with Fortran-66 in 1968 with 7 or 8 letter names. Those days are loooooong gone.
No amount of good variable naming can make blocks of assembly easy to understand.

Quote from: l_d_allan on December 14, 2013, 09:56:03 PM
Code so that the debugger is the most helpful ... simple statement per line so you can see as much as possible about what's happening (but I had a tendency to be over dependent on debuggers ... back in the day)
If we had a full emulator working, this might be a valid point. But there's just no way to 'debug' much of the ML code (e.g. explain to me how to run a debugger on a boot-loader running as a hack on an embedded system); again, this is 'real time' stuff, and debugging is quite a different beast from what you are probably used to with traditional programming.
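
To give a flavour of what 'debugging' tends to look like here: instead of breakpoints, you print state onto the camera's overlay and pause long enough to read it, in the style of the bmp_printf/msleep calls commented out in the code Audionut quoted above. A minimal sketch (the variables and values are made up, and it assumes ML's own headers for bmp_printf(), msleep() and FONT_MED):

static void debug_trace(int iso, float correction)
{
    /* a poor man's breakpoint: show the values on screen, then wait so they can be read */
    bmp_printf(FONT_MED, 0, 100, "iso=%d corr=%d/100", iso, (int)(correction * 100));
    msleep(500);
}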

Quote from: l_d_allan on December 14, 2013, 09:56:03 PM
Let the compiler do the optimizing
That's not always practical when you've got extremely limited resources and you're in a 'real time' scenario. Sometimes you have to specifically prohibit the compiler from optimizing things (e.g. values stored in hardware registers).
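
For illustration, a minimal sketch (with a made-up register address and values) of the kind of access the compiler must not be allowed to merge, reorder, or drop:

#include <stdint.h>

/* hypothetical memory-mapped register address, for illustration only */
#define LED_CONTROL_REG  ((volatile uint32_t *) 0xC0220000)

static void led_blink_once(void)
{
    /* without 'volatile', the compiler could collapse these into a single write,
       since it cannot see that the hardware observes each store */
    *LED_CONTROL_REG = 0x46;   /* hypothetical "LED on" value  */
    *LED_CONTROL_REG = 0x44;   /* hypothetical "LED off" value */
}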


Maybe you should actually have a look at the code base before you render opinions about it. This is a hack that runs in a real-time environment and calls reverse-engineered code. This is very, very different from traditional software programming.

evero

Quote from: a1ex on December 11, 2013, 07:16:50 AM
Discuss.

Just a perspective from a doc filmmaker's view:
I think it sounds like a great idea to consolidate and simplify! From a typical end-user point of view, Magic Lantern is still hard to get into: the potential is so great, but the technical barriers to start using it are challenging for many people (just my perspective, of course). Not necessarily the steps involved to actually use it, but the sum of the uncertainty about how stable it is, what build to use, and then the steps involved to "install" it. And that keeps it from getting into the hands of many users (who could contribute to the project - mostly feedback and donations, of course).

I think the project would attract a larger user base if it were possible to make the ML releases into something that separates the more stable functions from the "in development" functions:

Maybe I'm way off here, but what if it were possible to make e.g. a 5D3 build (or whatever cam) with core functionality like focus peaking and zebras (maybe some other stable functions) that is OK to label as "more stable"? ALL other functions, like RAW etc., could be marked as experimental/alpha (e.g. clearly separated in the menus), so people know when they move from the more stable functionality to the rest. In that scenario, maybe it would be possible to brand it a beta release? (Using myself as a case: I would definitely jump onboard using ML, seeing a beta tag, just knowing that focus peaking and zebras were beta-stable.)

I'm just thinking like this because I know from my own experience that I'm hesitant to use ML at all, with so many builds, different statements about which build to use, whether the released alphas (which are quite old now) are safer to use, etc.

I'm sorry if these ideas are of limited value; just some thoughts from an end user. Also, a huge thanks to all contributors to the project. I'm speechless at what's being achieved, and I really hope it will reach an even bigger audience soon :)