both of you are correct.
the best approach is to test code on all levels, which means using:
- static code analysis (haha!! have fun running polyspace on ML core)
- automated unit tests (e.g. on PC)
- test procedures (on the desk)
- field tests (under real-world conditions)
static code analysis: quite important, but pointless for the register and bit-banging code found everywhere in ML.
it is certainly possible for algorithms and maybe for some self-contained stuff like modules.
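to illustrate why: a minimal sketch of the two kinds of code (the register address and names below are made up for illustration, not taken from the ML source):

```c
#include <stdint.h>

/* memory-mapped I/O: a hard-coded hardware address looks like an
 * invalid pointer dereference to polyspace & friends */
#define CARD_LED_REG ((volatile uint32_t *)0xC0220134) /* illustrative address */

void led_on(void)
{
    *CARD_LED_REG = 0x46;   /* analyzer: "write to constant address" */
}

/* a pure algorithm like this, on the other hand, analyzes just fine */
int clamp(int v, int lo, int hi)
{
    return (v < lo) ? lo : (v > hi) ? hi : v;
}
```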
unit tests: due to the nature of magic lantern (highly invasive on the system, nearly no abstraction layers), unit tests are only possible in rare cases.
modules, for example, are (virtually) platform-independent and don't (should not) contain any model-specific hacks.
this makes modules a candidate for automated unit tests.
i wanted to do that for mlv_rec, but didn't get further than a concept in my head.
(yeah, C# again)
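to make the idea concrete, here is a minimal sketch of such a PC-side unit test, assuming a hypothetical pure helper like the ones a module would contain (mlv_block_align() is made up, not a real mlv_rec function; the point is that platform-independent module code compiles with a plain host gcc, no camera needed):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical helper: round a block size up to a 4-byte boundary */
static uint32_t mlv_block_align(uint32_t size)
{
    return (size + 3) & ~3u;
}

int main(void)
{
    /* exercise the boundary cases on the PC */
    assert(mlv_block_align(0) == 0);
    assert(mlv_block_align(1) == 4);
    assert(mlv_block_align(4) == 4);
    assert(mlv_block_align(5) == 8);
    printf("all tests passed\n");
    return 0;
}
```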
test procedures: these are important to cover most of the obvious bugs.
you cannot catch every class of error with them.
the good thing: they can partially be automated thanks to in-camera button faking.
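as a sketch of what that automation could look like, assuming ML's fake_simple_button() and the platform-specific BGMT_* button codes (those exist in the ML source, but the exact includes and the test flow here are made up for illustration):

```c
#include <dryos.h>   /* msleep(); exact header may differ per branch */
#include <bmp.h>     /* bmp_printf() */

/* fake a short menu roundtrip and let the tester check for artifacts */
static void test_menu_roundtrip(void)
{
    fake_simple_button(BGMT_MENU);       /* open the ML menu */
    msleep(500);
    fake_simple_button(BGMT_PRESS_DOWN); /* move one entry down */
    msleep(200);
    fake_simple_button(BGMT_MENU);       /* close it again */
    msleep(500);
    bmp_printf(FONT_MED, 0, 0, "menu roundtrip done - check the screen");
}
```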
field tests: the best but most expensive kind of testing; it requires time and effort.
the bad thing about it: you get reports like "doesn't work" or "crashes" and you have no more details.
this is the time to go all the way back up the chain: find out how to reproduce the issue, then write test procedures, set up unit tests and analyze the code.
it's that simple to get good-quality software
