Author Topic: What more can/should we do with ML?  (Read 3356 times)

garry23

  • Contributor
  • Hero Member
  • *****
  • Posts: 1740
What more can/should we do with ML?
« on: November 27, 2018, 06:09:37 PM »
As a photographer, and not a videographer, I am happy with the state of ML, eg: Dual-ISO; RAW ETTR; Auto Bracketing; and, of course, Lua scripting.
So, I would probably be content if the ML developers, eg @a1ex, said there will ‘only’ be stability updates on the photography side.

In this post, however, I’ve asked myself the question: “What other developments could help the photography users?”

I ask this question, as videography appears well supported by developers and users.

So here are three thoughts from me, as a starter for 10:
1. Relocatable Raw (LV) Histogram with increased readability
•   Why? The current histogram can clash with other ML and user-generated (Lua) overlays: its position can't be controlled. Also, one sometimes wants to see the details of parts of the exposure data, eg at the highlight end, for finer adjustment.
•   Thus I would suggest being able to:
a.   position the RAW histogram ‘anywhere’ on the screen – ML menu controlled
b.   scale it up (make it larger), eg x2 or x3 – ML menu controlled
c.   zoom into parts of the histogram, ie from X to Y, where X and Y run from 0 (black) to 255 (or whatever represents white) – ML menu controlled
•   Plus, Lua connectivity (see below).

2. Raw Histogram access in Lua
•   Why? At the moment, Lua scripters who wish to explore scripts that change exposure in a more informed way, eg based on the current/last image, can't do so, as Lua has no access to RAW histogram data.
•   Ideally one would like to access the Raw LV feed, but being able to interrogate the last captured image would be a great start.
•   CHDK offers the function ‘get_histo_range <from> <to>’.
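As a purely illustrative sketch (in Python rather than Lua, with a made-up function name; CHDK's real API may differ in its details), this is the kind of statistic a get_histo_range-style call boils down to:

```python
def histo_range_fraction(histogram, lo, hi):
    """Return the fraction (0.0-1.0) of pixels in bins lo..hi inclusive.

    `histogram` is a list of per-bin pixel counts, e.g. 256 bins for an
    8-bit scale, or 16384 bins for 14-bit raw data.
    """
    total = sum(histogram)
    if total == 0:
        return 0.0
    return sum(histogram[lo:hi + 1]) / total

# Toy 8-bin "histogram": most pixels in the midtones, a few near white.
h = [10, 20, 100, 300, 300, 150, 90, 30]
highlights = histo_range_fraction(h, 6, 7)   # top two bins -> 0.12
```

A script could then, say, treat a highlight fraction above some threshold as a cue to reduce exposure.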

3. Lua Sub-image Raw data access
•   Why? Even if 2 above is doable, we would still 'only' have statistics on the entire image. An extension would be to allow the Lua scripter to access the histogram info for a defined area of the last image (or the current LV screen, if that were possible), eg 'get_histo_range <from> <to> <x> <y> <w> <h> <mode>'
•   Mode could allow this function to be coupled (or not) to the current ML/Canon spotmeter, thus allowing various ways to control the location of the get_histo_range call and to define the feedback you want. For example, you could use it as a variable-sized spotmeter, returning the Ev value of that area, a percentage, etc. Or mode could return the ETTR exposure of the selected area. All scriptable from within Lua.
•   With this, Lua scripters could start to explore creating exposure helpers, eg take one or more further exposures based on the first image captured. In other words, a more informed approach to auto bracketing, under Lua user control.
•   As an example, differentiating an exposure for the sky from one for the ground.
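To sketch the arithmetic such a mode could perform (nothing below exists in ML or its Lua API; the function name, parameters and clipping allowance are all hypothetical), here is one way a region's ETTR headroom might be estimated from its raw values:

```python
import math

def ettr_shift_ev(region_values, white_level, keep_fraction=0.001):
    """Suggest an exposure shift in EV for a region of raw pixel values.

    Allows `keep_fraction` of the pixels to clip (specular highlights),
    in the spirit of ML Auto ETTR's highlight-ignore setting.
    """
    values = sorted(region_values)
    # Value below which (1 - keep_fraction) of the region's pixels lie.
    idx = min(len(values) - 1, int(len(values) * (1.0 - keep_fraction)))
    anchor = max(values[idx], 1)
    return math.log2(white_level / anchor)

# A region whose highlights peak around 2000 on a 14-bit-ish scale:
shift = ettr_shift_ev([500, 800, 1500, 2000] * 250, white_level=16000)
# log2(16000 / 2000) = 3 EV of headroom to the white level.
```

A Lua exposure helper along these lines could take the first frame, compute the shift for the sky region and the ground region separately, and bracket accordingly.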

Personally, I remain a non-C programmer, so I will not be taking the above forward in the core C code. I do, however, look forward to being able to access such functionality in Lua in the future ;-)

Bottom line: these are very much my humble thoughts on stimulating a conversation about what else we could do with ML (photo) functionality. I'm sure others will add to the list. Finally, this is not a request; although, as a minimum, I hope it stimulates some discussion that doesn't get too personal ;-)

ItsMeLenny

  • Hero Member
  • *****
  • Posts: 919
  • 550D
Re: What more can/should we do with ML?
« Reply #1 on: November 28, 2018, 12:02:58 AM »
I'd like to see the ability to link 2 cameras together and have them sync for stereo photography and video.
There was a spinoff of CHDK that did it with point-and-shoots.
The part of having them sync settings I'm sure wouldn't be hard with an in-between device;
it'd be the syncing of the photo being captured that'd probably be difficult.

garry23

  • Contributor
  • Hero Member
  • *****
  • Posts: 1740
Re: What more can/should we do with ML?
« Reply #2 on: November 28, 2018, 07:23:43 AM »
@ItsMeLenny

As for sync, have you looked at this dongle?

https://www.magiclantern.fm/forum/index.php?topic=23006.msg208098#msg208098

Walter Schulz

  • Contributor
  • Hero Member
  • *****
  • Posts: 6762
Re: What more can/should we do with ML?
« Reply #3 on: November 28, 2018, 09:54:33 AM »
I'm not sure about the merits of this thread. Kind of weird to discuss ML development tactic/strategy/roadmap without (core) devs participating ...
Anyway, let's dive into season spirit and make a wishlist.


Dear Santa,

I think this one is already on the to-do-list
- Button handling. Those Toshiba TX19A43CD and FD processors need to be cracked to get rid of some feature hiccups: mixing up silent pics, intervalometer, advanced bracketing... A permanent DOF button would be fine for silent pics and focus stacking.

Minor thing
- Sound recorder. Con: Today everyone (almost) has a smartphone able to take notes of all sorts.

Big, big wish:
- ML via PTP. All of it!
Not a new wish. Remote controlling ML via smartphone/PC/Raspberry Pi and derivatives, maybe Arduino. For drone pilots, remote locations (sports), and some studio applications it would make a big difference.
Not sure if (and how) it may interfere with ItsMeLenny's wish.


Your naughty boy
Walter
Photogs and videographers: Assist in proofreading the upcoming in-camera help! Your input is wanted and needed!

a1ex

  • Administrator
  • Hero Member
  • *****
  • Posts: 12236
  • Maintenance mode
Re: What more can/should we do with ML?
« Reply #4 on: November 28, 2018, 10:50:56 AM »
I'm not sure about the merits of this thread. Kind of weird to discuss ML development tactic/strategy/roadmap without (core) devs participating ...

OK then, here's my wish list.

Dear Santa,

Thank you for giving us the knowledge to implement these requests! Limited understanding of Canon code was previously holding us back, but thankfully, now it's pretty much a non-issue! We're not complaining, but would also appreciate some time and energy to actually implement these features, if possible not just during the holidays :)

Here's what I'd want for my own use:

- ISO optimizations to maximize dynamic range (still working on them)
- full-resolution silent pictures without exposure time restrictions:
   - with burst options (e.g. start/end trigger with half-shutter; currently doable, to some extent, from mlv_lite)
   - auto-selection of best images (currently done for 1080p LiveView frames)
   - in-camera blending to get long exposures or higher dynamic range (possibly fully- or semi-automatic)
- simple subject tracking in LiveView (e.g. for manual focusing in x10 zoom)
- powersaving optimizations (battery drains too fast for my taste)
- dual pixel stuff (refocusing, MF indicators, rough depth estimations) for when I'll decide to upgrade to a newer camera
- distance sensor to assist autofocus (that would be a hardware mod I'd like to try, as I'm finding 5D3's AF capabilities completely unusable in LiveView)
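As a rough illustration of the idea behind the in-camera blending wish (this is not ML code, just the core averaging step, with hypothetical names):

```python
def blend_average(frames):
    """Average several same-sized frames (flat lists of pixel values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy 4-pixel "frames" of the same scene:
f1 = [100, 210, 305, 400]
f2 = [104, 190, 295, 404]
f3 = [ 96, 200, 300, 396]
avg = blend_average([f1, f2, f3])  # -> [100.0, 200.0, 300.0, 400.0]
```

Averaging N frames approximates an exposure N times longer while reducing random noise by roughly sqrt(N); a real implementation would work on raw buffers in camera memory, not Python lists.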

Other stuff on my list, not necessarily for still photos:
- capture, organize and annotate detailed low-level logs for various usage scenarios [docs]
- update (auto-generate?) camera comparison tables/charts (e.g. available RAM, sensor readout speeds, card write speeds)
- emulate whatever we understand from the image pipeline, including image review, CR2 capture and LiveView (all in QEMU)
- write tests for every single ML feature in the emulator (relying on user feedback is increasingly hard)
- emulate secondary CPU firmwares, dual core models, GUI on DIGIC 6 and newer and so on (long shot; this would include the TX19A)
- arbitrary resolutions and frame rates in LiveView (PoC available for 700D and others)
- in-camera preprocessing (e.g. for dual iso video with 1x3 readout)
- revive the in-camera help browser (with search capabilities, keywords...)
- revive the "one download for all supported cameras" concept (or at least allow using plain Canon firmware from a ML card prepared from some different camera)
- integrate current developments (QEMU, Lua, recent ports, firmware updates, video enhancements) into mainline

Previous wishes:
- histogram enhancements: WIP
- histogram API for Lua: already suggested here (it's not forgotten)
- stereo 3D: likely easy, as the exposure is started on main CPU in both photo mode and LiveView; I've got the hardware to check timings, btw.
- TX19A/SH2A-FPU: very low priority; I did emulate a few instructions of TX19A some time ago.
- sound recorder: need to revisit the new-sound-system branch and figure out why it's crashing...
- PTP: that USB cable feels way too flimsy; nevertheless, it can be useful for interfacing with some Arduino or RPi, or to minimize card swapping during development. Recent models have Bluetooth LE (remote protocol was figured out) and WiFi. These can probably be modified to allow communication with external devices.

Feel free to suggest things. I'll definitely consider all the requests, but... this year I wasn't in the best shape :(

Walter Schulz

  • Contributor
  • Hero Member
  • *****
  • Posts: 6762
Re: What more can/should we do with ML?
« Reply #5 on: November 28, 2018, 11:22:47 AM »
- PTP: that USB cable feels way too flimsy;

Available in IP67, too: https://www.phoenixcontact.com/assets/images_ed/global/web_content/pic_con_a_0051384_int.jpg

Industry: Let's create a standard interface without locking ability for easy access and removal.
Industry, too: Hey, let's screw it.
Industry: WTF?

Offtopic, SCNR

nikfreak

  • Developer
  • Hero Member
  • *****
  • Posts: 1129
Re: What more can/should we do with ML?
« Reply #6 on: November 28, 2018, 03:37:13 PM »
...

- ISO optimizations to maximize dynamic range (still working on them)
- full-resolution silent pictures without exposure time restrictions:
   - with burst options (e.g. start/end trigger with half-shutter; currently doable, to some extent, from mlv_lite)
   - auto-selection of best images (currently done for 1080p LiveView frames)
   - in-camera blending to get long exposures or higher dynamic range (possibly fully- or semi-automatic)

These are enough, and exactly what I was thinking of when hearing about Google's Night Sight. Go google for examples. I'm amazed.

C'mon give it to me  ;D
No, don't say to achieve it in post on my PC.
70D.112 & 100D.101

Bernie54

  • New to the forum
  • *
  • Posts: 4
Re: What more can/should we do with ML?
« Reply #7 on: November 28, 2018, 05:21:37 PM »
I'm currently using ML on my EOS M to a large extent together with non-CPU lenses, so integrating the experimental "Non-CPU lens info" into the main build would be interesting for me. In addition, it might be helpful to hand over the aperture value selected in "Non-CPU lens info" to the camera control system (not only to the EXIF data), in order to let the camera "think" the aperture value was sent by the lens. That might enable the use of more automated camera features like TTL flash, SCN modes and others (not only Av and M mode). A judgement on this idea from the ML experts would be interesting for me.


IDA_ML

  • Hero Member
  • *****
  • Posts: 604
Re: What more can/should we do with ML?
« Reply #8 on: November 28, 2018, 06:59:08 PM »
ML is extremely capable already.  That is why I am mentioning just two things:
==================================================

1) I would like to see 4k crop recording working on the EOS 7D and this is also the wish of many 7D users.

2) I would also like to see one stable experimental build for every camera model, including all the latest developments, that every user can rely on - no crashes, no instabilities when changing modes and settings, no camera restarts, battery pulls, etc.  In my opinion, this alone would make ML much more useful and encourage many more people worldwide to use it instead of discarding their older Canon cameras and buying new ones.  It would also be a friendly gesture to Mother Nature.

Walter Schulz

  • Contributor
  • Hero Member
  • *****
  • Posts: 6762
Re: What more can/should we do with ML?
« Reply #9 on: November 28, 2018, 07:09:13 PM »
No offense, but 2) reminds me of a CEO asking the dev team for "a stable alpha" for CeBIT. No joking: it happened to us (company < 50 employees). The techs went silent and nobody dared to speak up.

There is no such thing as "bug-free software". In theory: yes, within strict limitations. In real-life applications: nope, not gonna happen.
This doesn't imply getting rid of known bugs is useless (quite the contrary), but there are limits to what programmers can do and achieve, no matter how good and motivated they are.

If I remember correctly, there was a remark by a1ex about releasing "stable v2.3" being a mistake. Maybe he wants to share more details about this here. I can think of a reason, but who am I to tell?

dfort

  • Developer
  • Hero Member
  • *****
  • Posts: 3713
Re: What more can/should we do with ML?
« Reply #10 on: November 28, 2018, 07:29:12 PM »
My wish list is short:
  • Update all currently supported cameras to the latest Canon firmware.
  • Enable bootflags (meaning push ML-SETUP.FIR to the main repository) on new ports as early as possible. 1300D should be ready now--maybe 200D, 5D4, 6D2, 760D, 77D, 80D, M50 soon?
  • Start publishing code for newer models as early as possible (e.g., EOS R)
  • Drop support for the older "unpopular" cameras (e.g., 500D, 600D, 1100D)
I'm not saying we should all get rid of our old cameras and buy new ones; it is just that trying to support legacy hardware for a handful of users might be holding back the development of new ports.
5D3.* 7D.206 700D.115 EOSM.203 EOSM2.103 M50.102

a1ex

  • Administrator
  • Hero Member
  • *****
  • Posts: 12236
  • Maintenance mode
Re: What more can/should we do with ML?
« Reply #11 on: November 28, 2018, 08:16:22 PM »
If I remember correctly there was a remark by a1ex about releasing "stable v2.3" to be a mistake.

Don't remember the context, but I do remember it was quite stressful to meet the self-imposed deadline, and to postpone the developments in order to do thorough testing. The end result wasn't exactly stable either; nightly builds that followed were generally much better.

Yes, some better release management would help, but I'm not sure I'm the right person for this. Heck, I don't even work in software development, so I'm not even up to date with current practices. In any case, when all I manage to do in months is to post on the forum during breaks from other activities... there's a problem. Unrelated full-time job and ML development don't mix; I really need to change something.

Enable bootflags (meaning push ML-SETUP.FIR to the main repository) on new ports as early as possible. 1300D should be ready now--maybe 200D, 5D4, 6D2, 760D, 77D, 80D, M50 soon?

Yes, high priority for the holiday season. 760D, 750D, 7D2: boot flag already enabled, 80D... PoC already available, DIGIC 7 ready, M50... waiting for testers to reply, 5D4... no volunteer yet willing to assume this risk.

1300D... I need some time to debug it myself in the emulator; otherwise, I could easily publish the FIR and watch other users struggling with a half-working / half-buggy port (much like the EOS M2). Sorry, I'm not convinced current users will be able to handle it on their own.

Quote
Start publishing code for newer models as early as possible (e.g., EOS R)

EOS R: I prefer to wait for Canon's firmware update. Current attempts only resulted in green screen; not sure what to publish.

Quote
Drop support for the older "unpopular" cameras (e.g., 500D, 600D, 1100D)

500D: that's my main target for researching LiveView and image capture. Reason: some debug messages not present in other cameras, and overall simpler code. I've even got its LiveView and CR2 playback halfway working in QEMU.

My plan is to delegate the testing for these old models to QEMU as much as possible, rather than begging other users for feedback.

70MM13

  • Member
  • ***
  • Posts: 242
Re: What more can/should we do with ML?
« Reply #12 on: November 28, 2018, 08:30:37 PM »
It shocks me to see no 5D4 users helping out.

I'm so happy with my 5D3 thanks to all of the "magic" of magic lantern that I don't see any reason to upgrade, but seeing the lack of user participation with such a worthy camera makes me consider buying one just so I can help however I can.

I just don't know how much I can actually help.

Is there a curse on the 5D4?


a1ex

  • Administrator
  • Hero Member
  • *****
  • Posts: 12236
  • Maintenance mode
Re: What more can/should we do with ML?
« Reply #13 on: November 28, 2018, 09:12:22 PM »
Well, 5D4 users probably didn't buy it with programming in mind; they were interested in something that just works. And I'm interested in delegating some of the porting effort to some other 5D4 users => something doesn't add up. Same for all other recent models. All of my cameras are currently collecting dust, including the 5D3 (I'm just taking a few pictures per month, if any), so the decision to upgrade didn't make much sense to me. Maybe later.

From a ML standpoint, the camera is not bad. Readout speed was estimated recently to about 400 MPix/second (50 MHz x 8 channels). EOS R is likely the same. In 4K and 720p120, Canon is driving the sensor at "only" 260 MPix/second, so I might speculate there is some room for improvement, for either reducing rolling shutter, or increasing resolution or frame rate.

Same for 80D and all other 24MPix cameras from Canon - I'm speculating they might have the same sensor as M50, i.e. 4K-capable. Readout speed is likely about 300 MPix/s (possibly 324 = 27 MHz x 12 channels, unconfirmed). Worst case 216 MPix/s (27 MHz x 8 channels).

As a comparison:
- 5D3 has a theoretical readout speed of 192 MPix/second (24 MHz x 8 channels)
- 5D2 has exactly half, i.e. 96 MPix/s (24 MHz x 4 channels)
- 6D has 102 MPix/s (25.6 MHz x 4 channels)
- 7D... apparently 192 MPix/s, just like 5D3 (24 MHz x 8 channels, need to double-check)
- 70D has 256 MPix/second (32 MHz x 8 channels - hint @nikfreak)
- 700D/100D/650D/M have 128 MPix/s (32 MHz x 4 channels)
- 550D/600D/60D/50D have 115 MPix/s (28.8 MHz x 4 channels)
- 500D has 64 MPix/s (32 MHz x 2 channels)
- 1100D might have 128 MPix/s (32 MHz x 4 channels, but default video readout apparently uses 2? I really need to double-check that)
- 5DS: unsure, possibly 24 MHz x 16 channels = 384 MPix/s theoretical speed
- 7D2: will check later; in any case, it's over 300
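For anyone who wants to sanity-check the list above, the arithmetic is simply pixel clock (MHz) times readout channel count; a throwaway Python check, using the figures as quoted:

```python
# Theoretical readout speed = pixel clock (MHz) x readout channels,
# giving megapixels per second. Values as quoted in the post above.
def readout_mpix_per_s(clock_mhz, channels):
    return clock_mhz * channels

speeds = {
    "5D3":  readout_mpix_per_s(24, 8),    # 192
    "5D2":  readout_mpix_per_s(24, 4),    # 96
    "6D":   readout_mpix_per_s(25.6, 4),  # 102.4 (~102)
    "70D":  readout_mpix_per_s(32, 8),    # 256
    "700D": readout_mpix_per_s(32, 4),    # 128
    "500D": readout_mpix_per_s(32, 2),    # 64
}
```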

dfort

  • Developer
  • Hero Member
  • *****
  • Posts: 3713
Re: What more can/should we do with ML?
« Reply #14 on: November 28, 2018, 09:26:51 PM »
Question:
Is there a curse on the 5D4?

Answer:
5D4... no volunteer yet willing to assume this risk.

I figured that dropping support for the "unpopular" cameras would be the least popular item on my list.

IDA_ML

  • Hero Member
  • *****
  • Posts: 604
Re: What more can/should we do with ML?
« Reply #15 on: November 28, 2018, 10:05:42 PM »
There is no such thing as "bug free software". In theory: Yes, within strict limitations. In real life applications: Nope, not gonna happen.
This doesn't imply getting rid off known bugs is useless (quite on the contrary) but there are limits what programmers can do and achieve. No matter how good and motivated they are.

I never said "bug-free software".  I am not stupid and know very well that such perfect software does not exist.  I used the term "stable experimental build", and I will illustrate what I mean by that with an example from my own experience.  I recently used such a stable earlier build with the 100D to film an event that lasted 6 hours, and I never had a single crash.  Thanks to Danne's systematic efforts, and a lot of testing and feedback on my part, we were able to fix bugs and iron out instabilities.  The result was a build that worked in a stable enough way to get the job done.  And this is exactly what I meant.  In my opinion, in a close collaboration between developers and testers, creating such stable builds for all camera models is fully doable.  This is also how users of the various cameras who are interested in such stable builds could help - simply by testing the different functions of their cameras, identifying possible bugs and instabilities, and reporting them to the developers and programmers.  If there are no users of a particular camera model willing to help in this way, then there is probably no interest in that model any more.  In that case, freezing support for it might be quite reasonable.

Recently, Dfort sent me a short text that explains my suggestion very well:

https://en.wikipedia.org/wiki/Release_early,_release_often

g3gg0

  • Developer
  • Hero Member
  • *****
  • Posts: 3143
Re: What more can/should we do with ML?
« Reply #16 on: November 28, 2018, 11:16:10 PM »
OK then, here's my wish list.

Dear Santa,

...give me three clones of myself.
one for taking care of my family.
one for doing my main work.
another one for working on ML again.

so i can finally relax and enjoy life :)
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: paypal@g3gg0.de
ONLY donate for things we have done, not for things you expect!

Danne

  • Contributor
  • Hero Member
  • *****
  • Posts: 5634
Re: What more can/should we do with ML?
« Reply #17 on: November 29, 2018, 07:08:34 AM »
Holy Santa.
- Understand the ML code better and learn C
- ISO optimizations
- In-cam darkframe averaging, or automated creation of darkframes
- In-camera blending
- Real-time full-res preview at higher resolutions
- Port a camera (getting unrealistic)
- More time, energy
- Actually film and take photos again ;)
- What g3gg0 said




dfort

  • Developer
  • Hero Member
  • *****
  • Posts: 3713
Re: What more can/should we do with ML?
« Reply #18 on: November 29, 2018, 03:20:12 PM »
...give me three clones of myself.

Ha ha. I'd probably spend all my time arguing with my clones.

Got another one:

Dear Santa,

- Give us a permanent fix to the focus pixel issue.

WishMoney

  • Just arrived
  • *
  • Posts: 1
Re: What more can/should we do with ML?
« Reply #19 on: November 30, 2018, 04:47:25 AM »
Reading this thread made my chest tight. I wish I were rich, so I could support all of you devs, seriously. Unfortunately, working as a videographer here in Brazil is not easy.
Since I got a Canon DSLR (7 years ago) I've been using ML. You guys have done so much for me, and I feel bad for not doing something for you (although I do try to help sometimes - I've used many profiles here, as I've lost my login many times).
Not just ML, but all of the free software community (MLVApp devs, looking at you).

So, I would just like to say "Thank you", really.

Dear Santa,
I wish money to support all those kind minds I value so much.


Quote
- ISO optimizations

That would be very nice too :D
The ADTG research is fascinating. Integration with Dual_ISO and Auto_ETTR would also be cool.

zalbnrum

  • New to the forum
  • *
  • Posts: 29
Re: What more can/should we do with ML?
« Reply #20 on: December 02, 2018, 03:50:44 PM »
I wish:

- Clean and accurate real-time preview for 3.5K crop_rec on 5DIII.

Thank you!

(Is that possible @a1ex ?)

a1ex

  • Administrator
  • Hero Member
  • *****
  • Posts: 12236
  • Maintenance mode
Re: What more can/should we do with ML?
« Reply #21 on: December 02, 2018, 06:13:56 PM »
This one is hard, as display buffers on 5D3 appear to be created with the help of a secondary processor named Eeko (unlike most other models). There were minor advancements on 700D, 60D, 5D2 - these models configure the display buffer pipeline from the main CPU.

Lately (last few months) I've been investigating the image capture side (before the preview) on many models, from DIGIC 3 all the way to DIGIC 8, with the long-term goal of enabling arbitrary resolutions and frame rates (within sensor readout speed limits) on all cameras able to run ML (current and future).

Recent models (DIGIC 6 and newer) appear to offload some of the image capture process to the secondary processor (renamed to Omar), so figuring out one of them (e.g. preview on 5D3 or capture on 80D/5D4/etc) will help the others. It's a long-term goal for me, unlikely to be completed during this holiday season.

zalbnrum

  • New to the forum
  • *
  • Posts: 29
Re: What more can/should we do with ML?
« Reply #22 on: December 02, 2018, 07:29:45 PM »
Thanks. Now you have put "Omar" and those graphs I saw recently on the forum into context for me. Sounds complicated; I hope you can crack it some day.

So that's (also) why the 5D4 / EOS R are so tough to break...

wadehome

  • New to the forum
  • *
  • Posts: 14
Re: What more can/should we do with ML?
« Reply #23 on: December 30, 2018, 10:06:44 AM »
M50... waiting for testers to reply, 5D4... no volunteer yet willing to assume this risk.

I don't have time to scrub through 500 forum threads to get the answer right now, but I shoot the 5DIV and M50, have crazy good insurance, and NEED less crop, less rolling shutter, focus peaking, and every Magic Lantern godsend. I can take the risk. Gimme this responsibility pls.

a1ex

  • Administrator
  • Hero Member
  • *****
  • Posts: 12236
  • Maintenance mode
Re: What more can/should we do with ML?
« Reply #24 on: December 30, 2018, 11:09:53 AM »
OK, I'll bite. Do you happen to also run firmware 1.0.4 on the 5D4? Otherwise it may take a little longer.

For M50 I've already got a very good tester; feel free to check out the progress.