both of you are correct.
the best approach is to test code on all levels, which means using:
- static code analysis (haha!! have fun running polyspace on the ML core)
- automated unit tests (e.g. on PC)
- test procedures (on the desk)
- field tests (under real-world conditions)

static code analysis
quite important, but pointless for the register and bit-banging code that is everywhere in ML.
certainly possible for algorithms and maybe for some things like modules.

unit tests
due to the nature of magic lantern (highly invasive on the system, nearly no abstraction layers), unit tests are only possible in rare cases.
modules for example are (virtually) platform independent and don't (should not) contain any model-specific hacks.
this makes modules a candidate for automated unit tests.
wanted to do that for mlv_rec, but didn't get further than a concept in my head.
(yeah, C# again)
test procedures
are important to cover most of the obvious bugs.
you cannot catch every error case with them, though.
good thing is that they can partially be automated thanks to in-camera button faking.

field tests
the best but most expensive kind of testing. requires time and effort.
bad thing about it - you get reports like "doesn't work" or "crashes" with no further details.
this is the time to go all the way back up: find out how to reproduce the issue, then write test procedures, set up unit tests and analyze the code.
it's that simple to get good quality software