Topics - dmilligan

#1
General Development / ARM on ARM
January 06, 2017, 10:54:48 PM
Recently I acquired a Raspberry Pi 3. I was thinking of maybe using it for ML development; the question is how to set up the toolchain. Can I just use the built-in gcc somehow? Or will I need to compile it (sounds hard and time-consuming, probably beyond the amount of spare time I have)? Or maybe there's a prebuilt option I can use? I don't have a lot of expertise on all the ABI stuff and the differences between gcc versions/configurations.

This post made me think a1ex was maybe already doing this:
Quote from: a1ex on January 06, 2017, 02:37:21 AM
I'm running a derivative of it on Axiom Beta (ARM processor, Arch Linux).

There are some ARM tablets running Linux as well.
#2
Scripting Corner / Ramp.lua
July 16, 2016, 10:01:44 PM
This is a script intended to replace adv_int.mo. It's not quite complete yet, but creating keyframes is already much easier (it has a custom GUI).

Code: https://foss.heptapod.net/magic-lantern/magic-lantern/-/commits/branch/ramp_lua

To try it out, simply copy ramp.lua to ML/scripts, and class.lua and config.lua to the ML/scripts/lib directory.
#3
Modules Development / MLV Lite
February 15, 2016, 03:42:22 AM
MLV Lite
The sweet taste of MLV with none of the extra calories™



Recently I've been thinking about ways to do away with the old RAW file format (it has a lot of problems). I had originally thought of creating a sort of hybrid between the RAW and MLV formats, with an MLV header at the beginning and then just a dump of raw data after that like the RAW format, but a1ex helped me realize that it's not hard at all to modify raw_rec to generate completely valid MLV files ("MLV Lite" naming credits also go to a1ex). So, that's what I've done, and it seems to be working. Now it just needs some testers.
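For reference, a file produced this way is laid out roughly like the sketch below (block names are from the MLV spec; the exact set of metadata blocks written may differ):

MLVI   file header (version, frame count, fps, ...)
RAWI   raw image info (resolution, black/white levels, bayer pattern)
IDNT / EXPO / LENS / WBAL / RTCI   one-time metadata, written at record start
VIDF   frame 0 (frame header + raw sensor data)
VIDF   frame 1
...

So it's the old "header, then a stream of frames" idea, except everything is a proper MLV block that any MLV tool should understand.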

Download: raw_rec.mo (now available in nightly builds)
PR: https://bitbucket.org/hudson/magic-lantern/pull-requests/685/proposal-completely-replace-the-old-raw

There should be virtually no difference between the way this raw_rec operates and the one in the current nightlies, with the sole exception that you get MLV files out of it. You should see almost identical performance and behavior.

There are some caveats to what you get compared to the full mlv_rec:
1. Metadata reflects only the settings at the moment recording started (e.g. you get expo metadata, but if you change exposure mid-recording, the new values are not recorded).
2. No audio
3. No card spanning (recording to both CF and SD card)

This needs to be thoroughly tested so we can convince the main devs to merge it. Here are some things you should test:
1. High FPS
2. File splitting at 4GB limit
3. What happens when card runs out of space?
4. Heavy CPU usage options (GD overlays, crop mode preview, etc.)
5. Metadata is correct?
6. Try various MLV converters to make sure they can all handle the files
7. Write performance comparisons (this raw_rec vs. previous raw_rec vs. mlv_rec)

Make sure to fully describe every situation you tested and the results (whether positive or negative), and say which camera you're using; it'd be good to have tests for both 5D3.113 and 5D3.123.
#4
Scripting Corner / Lua Scripting (lua.mo)
March 29, 2015, 04:44:07 AM
Lua Scripting Module



Run lua scripts in your camera!



Source: https://bitbucket.org/hudson/magic-lantern/branch/lua
Documentation: http://davidmilligan.github.io/ml-lua/
Current State: Merged!

Download:
Lua is now part of nightly builds!

Looking for an IDE? http://code.visualstudio.com/ + Lua extensions

NOTE: ML uses Lua version 5.3 (which is rather new and not yet widely adopted). It is very similar to 5.2 (the currently widespread version); the key difference has to do with integer division. Older versions of Lua always use floating-point division and always promote integers to floating point when dividing; 5.3 can keep integers as integers and do integer division. I went with 5.3 for exactly this reason: on our resource-constrained devices, integer division is very helpful (floating-point division is much slower). Use a double slash (e.g. "2//3") to indicate integer division.
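A quick illustration of the difference (plain Lua 5.3, nothing camera-specific):

print(7 / 2)              -- 3.5       ('/' always produces a float)
print(7 // 2)             -- 3         ('//' is floor division; integers stay integers)
print(math.type(7 // 2))  -- integer
print(7.0 // 2)           -- 3.0       (still floor division, but float if either operand is a float)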

Can I help?
Perhaps you've been wanting to help with ML development, but writing low-level C code or digging through all the ML code to figure out how to do stuff is a little over your head? Well, I could use some help writing unit-test scripts to test the scripting engine and API, and this should be a very easy task (it just requires basic knowledge of lua, a simple high-level scripting language). You don't even need to compile this branch: just write your test script and post it in this thread and I'll run it (though if you want to run it yourself, feel free; request a build here, or compile in the cloud, just be aware things are very unstable at this point). I usually have just barely enough spare time here and there to implement a few API functions, and not really enough time to actually test that they work, so you may find portions of the API not working; this is why I really need your help. Thanks!

What to test and how?
I would prefer automated scripts, like unit tests, that I can just run and that spit out errors if something is not working as expected (hint: use the assert() function). For example, to test file IO: write some data to a file, then read it back; if it's not the same data, throw an error. That's not always possible, though, so a script that simply exercises some particular API function is also still helpful. Things that need testing: any API function, menu stuff (make sure all the fields work as expected; see the menu example), events stuff, file IO (use the built-in lua io library for this; note that stdin/out/err do not work, I know this), making sure all the constants are correctly defined, performance and resource utilization, etc.
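For instance, a minimal file IO test in that style might look like this (the file name is arbitrary):

-- write some known data, read it back, fail loudly on mismatch
local data = "hello from the lua test script"
local f = assert(io.open("iotest.txt", "w"))
f:write(data)
f:close()
f = assert(io.open("iotest.txt", "r"))
local readback = f:read("*a")
f:close()
assert(readback == data, "file IO: read data does not match written data")
print("file IO test passed")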



Original Message:

Well, I just managed to get lua running as a module. Right now it's just "hello world", but I thought I'd share my progress.

https://bitbucket.org/dmilligan/magic-lantern/branch/lua

I'm quite familiar with lua, as I implemented it inside a project at work and even wrote a fancy IDE/debugger. So now that I have the library compiled and running, creating an API should be fairly easy and straightforward.
#5
Compiling Magic Lantern in the Cloud


Just found this neat solution for compiling and editing ML "in the cloud": there are several online services that provide free Linux VMs for coding. I managed to get ML to compile in one of these with minimal setup.

Here's a simple step-by-step guide:



  • Head over to http://www.codio.com and set up an account (you get a VM with 2GB of hard drive and 256MB of RAM for free).
  • Create a new project with the default "stack"
  • When the project opens, go to Tools > Install Software and install 'zip'
  • Now go to Tools > Terminal
  • Copy and paste in the following command* (thanks g3gg0):

wget -q -O - http://pastebin.com/raw.php?i=jfVXzw1a | sed "s/\r//g" | bash



Now you are all set to compile. cd to your specific camera's directory, for example:

cd platform/550D.109/


Then:

make clean && make zip


Once you've finished building, to download your files from the VM to your computer, go to Project > Export as Zip.

For more info on how to check out other branches, merge, and do other version control stuff, see: http://hginit.com and also http://www.magiclantern.fm/forum/index.php?topic=9524.0

Happy Coding!


Next Steps: figure out how to set up QEMU in there



*here's the original script for reference:

#!/bin/sh
# download and unpack the gcc-arm-none-eabi toolchain
cd
curl -OL https://launchpad.net/gcc-arm-embedded/4.8/4.8-2013-q4-major/+download/gcc-arm-none-eabi-4_8-2013q4-20131204-linux.tar.bz2
bzip2 -d gcc-arm-none-eabi-4_8-2013q4-20131204-linux.tar.bz2
tar -x -f gcc-arm-none-eabi-4_8-2013q4-20131204-linux.tar
# install docutils (needed by the build for the docs)
curl -OL http://prdownloads.sourceforge.net/docutils/docutils-0.12.tar.gz
gzip -d docutils-0.12.tar.gz
tar -x -f docutils-0.12.tar
cd docutils-0.12
python setup.py install --prefix=~/.local
cd
# put the locally installed tools on PATH, now and on future logins
export PATH="$HOME/.local/bin:$PATH"
echo "export PATH=\"\$HOME/.local/bin:\$PATH\"" >> .bash_profile
cp .local/bin/rst2html.py .local/bin/rst2html
# clone the ML source (unified branch)
cd workspace
hg clone -r unified https://bitbucket.org/hudson/magic-lantern
cd magic-lantern
#6
MLVFS - Magic Lantern Video File System



What is it?
An MLV "converter" that provides a virtual file system of converted data 'on the fly' using Filesystem in Userspace (FUSE). It allows you to "mount" MLV files, which then show up as directories of converted CDNGs. The data for these CDNGs is provided 'on the fly' by MLVFS as it is requested by whatever raw editor or post-processing software you are using.

It's fast and efficient, and I've managed to get real-time playback in Premiere/SpeedGrade (on my 2012 MacBook Air!).

The code is written in C and works on Linux, Mac (via the OSX port of FUSE: http://osxfuse.github.io/), and Windows (via the dokany project: https://github.com/dokan-dev/dokany, thanks to g3gg0).




Advantages
- You don't have to convert first; you more or less "instantly" have "converted" DNGs
- You don't have to choose between keeping the original MLVs or the converted DNGs, or doubling disk usage to keep both.
- There's no need for a GUI; one is already provided by the OS (i.e. the OS's file browser).
- Possible to vary the quality, pre-processing, bit depth, etc., of the DNGs on the fly




Updates
2014-09-02 Working Proof of Concept => There's much more work to be done (see below), but there is a working proof of concept, and a Mac build is available.
2014-09-08 Audio is working!
2014-09-09 Mac installer/service
2014-09-19 White balance mostly working
2014-09-20 AE working! and filesystem is now read/write (files are stored in .MLD directory in the real filesystem)
2014-09-26 Vertical banding correction
2014-09-27 Chroma smoothing
2014-10-08 Bad pixel correction
2014-10-09 Embedded mongoose webserver for configuring options at runtime
2014-10-13 Full quality dual ISO working (cr2hdr-20bit)
2014-10-30 Auto-linked audio in DaVinci Resolve
2015-02-05 Compressed file support
2015-02-17 Animated GIF Previews
2016-02-05 Fix for Focus Pixels
2016-02-07 Full Windows Support thanks to g3gg0 (via dokany)
2017-05-05 Lossless JPEG Support thanks to bouncyball



Source Code
https://bitbucket.org/dmilligan/mlvfs




Download
Mac
Install OSXFUSE
Download MLVFS.dmg
Double click the "MLVFS.workflow" file and you should see the following prompt:



(Click Install)

Now, you can right click a folder (with MLV files in it) and go to "Services" > "MLVFS"



You will be prompted to select a mount directory (it must be empty)
To stop mlvfs, simply click the eject button that shows up next to the mount point

If you get a security error, you may need to change your security settings to allow unsigned/downloaded programs to run

Linux
Install FUSE via the normal way for your distribution. It's probably called something like libfuse-dev

apt-get install libfuse-dev

Download the code with git and compile using make

Windows
Install Dokany
Download MLVFS_x86

Usage
Start it via the command line like this:

cd <mlvfs_exe_dir>
mlvfs.exe <mount point> --mlv_dir=<directory with MLV files>





Web based GUI





Known issues and things not yet implemented
- Audio
- Raw image pre-processing
    - hot/cold pixel correction
    - vertical banding correction
    - chroma-smoothing
- Support for compressed MLV files
- I'm sure there's more stuff...




Contributors
ayshih
dmilligan
g3gg0 (Windows port)

Credits
ML developers (esp. g3gg0 and a1ex for designing the MLV format and providing reference converters)
Developers of open source MLV=>CDNG converters: chmee and baldand => your work and source code were a great help in understanding the CDNG format and requirements




Some helpful tutorials from @reddeercity and @DeafEyeJedi



https://vimeo.com/168433918




Original Post
Quote
So, what would be the 'best' or 'easiest' possible workflow for MLV? Native support for MLV format in whatever raw video processing tool you use (via open source code!), obviously.

There are ways to do this; many programs provide an API for third-party plugins to implement something like this, but any implementation would be a lot of work and limited to working with one single app (as APIs are going to vary). Then I had the thought: what if we went down to a lower level of abstraction that all apps have in common, the file system itself? So I started looking around the internet for an easy way to create virtual file systems, and found this awesome API that allows you to do just that in a userland program: FUSE

So the basic idea is that you can "mount" MLV files. The converter program provides info and converted data to the OS "on demand" for this virtual file system. This is sort of like how you can mount a disk image or traverse into zip files in Windows without actually extracting them.

Here's how it works: you just start the program, point it to some 'live' directory (where you can drop MLV files) and give it a mount point. The mount point directory then shows up as having folders for each MLV file containing CDNGs for each frame.

So I have started working on this, and I have a VERY preliminary proof of concept working. It is by no means usable yet; there are all kinds of issues and things not implemented yet, but I can at least set up the file system and provide some semblance of the data that's going to be in the DNG (not readable yet, b/c I haven't created the DNG header with all the correct tags and such, and there are probably issues with my bit unpacking code) so that I can test speed and such. Basic DNG conversion is working and it seems to be pretty fast; so far it's not really noticeable that you're using a virtual file system.

If you're a coder and would like to help, that'd be fantastic, esp if you have a good knowledge of the DNG spec (I don't and I'm learning all of this as I go).
#7
Based on this discussion I've created a module to make calculating exposure for Neutral Density (ND) filters and long exposure photography easier.



This module will add a new setting to the Shoot menu. Here's how you use it:

  • Set the ND filter strength in the menu.
  • Meter however you like without the ND filter attached (ETTR or Tv/Av or just manual).
  • Attach the filter
  • Hold SET. Camera will switch to BULB mode and take a pic based on current expo settings and ND strength.

Source code: (already merged)

Obviously you will need an ND filter that is easy to attach and remove for this method to be effective. Alternatively, you can use an adjustable ND filter => adjust it to minimum to get expo settings, enter max minus min as the value in the menu, then adjust to max and hold SET.
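The underlying math is just doubling the metered exposure once for every stop of ND. A hypothetical illustration in Lua (not the module's actual code):

-- each stop of ND doubles the required exposure time
local metered  = 1/30      -- metered shutter without the filter, in seconds
local nd_stops = 10        -- filter strength set in the menu
local bulb     = metered * 2 ^ nd_stops
print(string.format("bulb time: %.1f s", bulb))   -- ~34.1 s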

You can also measure the actual strength of your ND filter (or filter stack). Select the 'Measure ND' option and follow the instructions.

(I'm also open to suggestions for a better name => 8 character limit)
#8
Building upon the full-res silent pictures capability recently discovered by a1ex, I've made a luminance-triggered mode. It works by continuously taking full-res silent pics and saving one whenever a change in scene luminance above a certain threshold is detected. My primary motivation is capturing lightning, but there are other applications (such as a more generic motion trigger; a trap-focus mode could also be implemented). I could use some help testing this code; a lot of getting this to work is going to be finding the settings and thresholds that work best.

Code here:
https://bitbucket.org/hudson/magic-lantern/pull-request/570/luminance-triggered-fullres-silent-pics

Enable the trigger in the silent pictures menu and then go to LV and press half shutter. A sort of slow, B&W live view from full-res silent picture data will be displayed. Each time a change in luminance above a certain threshold is detected, the image that was just captured will be saved to the card. Press half shutter again to stop.
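In pseudocode, the trigger loop is roughly the following (the module itself is C; every function name here is made up purely for illustration):

local threshold = 10                          -- tunable; units depend on how luminance is measured
local prev = nil
while half_shutter_held() do                  -- hypothetical helper
    local img = capture_fullres_silent()      -- hypothetical helper
    local lum = mean_luminance(img)           -- hypothetical helper
    if prev and math.abs(lum - prev) > threshold then
        save_dng(img)                         -- keep the frame that saw the change
    end
    prev = lum
end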

I'm not going to post any builds; this is code that I would consider quite unstable. If you're going to test this, you need to know what you're doing, at least for now.

Here it is running on 1100D:


I imagine that this will work pretty well at night or in dark scenes where you can have longer exposures and greater change in luminance from the lightning, but during the day it may have more trouble triggering, and the shorter your exposures, the greater the chance of missing a bolt.

It's probably a good idea to turn global draw off (unless you want the preview; then you need it on, but you should turn all the individual overlays off).

If somebody has some 60p footage of a lightning strike, it would be helpful for testing. Feel free to PM me.

Please note the limitations of full-res silent picture mode:
Quote from: a1ex on July 01, 2014, 05:11:15 PM
Limitations

- The fastest shutter speed I've got is around 1/10 seconds (very rough estimation by comparing brightness from a regular picture). With regular pictures, faster speeds are done via mechanical shutter actuation.
- Long exposures are fine up to 15 seconds (longer exposures will crash the camera).
- Fastest capture speed: 220ms on 5D3, 320ms on 5D2. This includes a dummy readout, which is probably a bias frame.
- So, at least for now, the usefulness is limited to timelapse and medium-long exposures (no moving subjects).
- If you use the intervalometer, I recommend taking a picture every 10 or 15 seconds (not faster). Saving DNGs from the camera is slow.
- In photo mode, aperture will be most likely wide open, regardless of the setting, because of exposure simulation (enable Expo Override to fix it).
#9
MLV and RAW File Format Silent Pictures

With the latest full resolution silent picture development, saving these full resolution DNGs is still quite a slow process.

Now you can save these full-resolution images in either of the raw video formats. This speeds up saving the images, and also provides the benefit of all images in a timelapse being captured in one single file for post-processing, rather than hundreds of separate files.



source
silent.mo (not for use with nightly builds; requires a build from the full-res-silent-pics branch)

Some mlv/raw video converters may have trouble with the large frame sizes or OB areas.
You must load mlv_rec module to use the MLV option.
Intervalometer sequences and bursts will be stored in a single container (up to 4GB, then you'll get a .M00, .M01, etc.)


Rather than cluttering this thread with build requests, please request builds in the dedicated build request thread.
#10
Share Your Photos / Non-Video uses for Raw Video
June 22, 2014, 09:43:05 PM
Long exposures in the daytime without an ND filter:




I can't use my ND filter with my fisheye lens due to the curvature, so I decided to use raw video instead to approximate a long exposure. One added advantage is that stacking all those individual exposures greatly reduces shadow noise, thus increasing the effective amount of DR I can capture, and there's a large amount in this particular scene (the right wall is brightly lit by the sun and the left wall is in deep shadow). The ACR settings used were +2ev, +100 shadows, +50 blacks. Even with these settings the shadows are very clean with the exception of some banding due to sensor FPN. With proper black frame correction that could be taken care of as well. The main issue is the low resolution and the aliasing in non-crop mode, so I decided to do another in crop mode where I could get a little higher resolution and less aliasing (second photo). Obviously this means I had to sacrifice some FOV, which kind of defeated the point of using the fisheye, but this technique could perhaps be useful for others, esp. on some of the higher end FF cameras, or for those who have no ND filter at all.

Both of these were taken with a 60D and MLV raw video. The first is 1728x972 (60D max res, non-crop), and the second is 2240x1080 (max res, crop). FPS override to slightly under-crank (I don't recall the exact amount but I think I used something like 18fps), and "low-light" mode to get as large a shutter angle as possible. The first is 212 frames and the second is 54 (doing the math that's an effective exposure time of ~11 seconds for the first shot and ~3 for the second). Post-processing: converted to DNG with MLRawViewer. Then I stacked the frames using PixInsight (software I typically use for astrophotography) to a 32-bit floating point TIF and then toned in ACR/Lr.
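For rough intuition on the noise benefit: averaging N frames improves SNR by about √N, so the 212-frame stack gains roughly √212 ≈ 14.6x, close to 4 stops of extra usable shadow range (for random noise only; fixed-pattern noise like the banding mentioned above doesn't average out).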

(of course there are other non-video uses of raw video, escho is doing some amazing stuff with astrophotography)
#11


Camera Equipment:
Canon 60Da
Rokinon 8mm Fisheye
AC Power
chemical handwarmers for dew control

Setup:
Interval: 45s
AutoETTR
Slowest Shutter: 32"
Shadow SNR: off

Post:
Adobe Bridge/ACR
Deflicker with my script for Bridge
Render with AE

If you want more detail on my process feel free to ask.
#12


1100D, Canon 10-22mm @ 10mm, f/3.5
AutoETTR, Interval time: 20s, Playback speed 12fps + AE frame blending to get to 24fps
Deflickered with my Adobe Bridge script

This is just about the most challenging scene I can think of for AutoETTR and the deflicker script to handle, and they performed superbly.
#13
Share Your Photos / Astrophotos with a new lens and ML
January 07, 2014, 04:29:48 PM
I recently acquired a Canon 200mm f/2.8L. I am rather pleased with it:





The first image is a stack of 24 x 300 second exposures (2 hrs); the second image is 12 x 300s (1 hr). Shot on a Canon 60Da at ISO800. Ambient temperature was about -5C.

Shot using the ML Intervalometer + Bulb Timer
Manual Focus using LV FPS override = 2fps, Low Light mode, 10x Zoom
Dark Red overlays to save my night vision

The camera and lens were mounted to my telescope's equatorial mount to allow the long exposure times. The images were processed using PixInsight and Photoshop CC.

Here's a wider angle star trails shot I did with my 28mm lens:


(Click on the thumbs for full size)
#14
General Development / Mercurial Tips
December 11, 2013, 08:38:25 PM
ML was my first foray into the world of distributed version control. I have learned a lot from reading and making mistakes, and I thought I'd share what I've learned and my general workflow here in the hopes that it will be useful to other DVCS newbies.

If you're not already an expert on mercurial, see: http://hginit.com/
I assume you already know how to do basic stuff like clone/add/commit/branch/merge etc. I also recommend finding a nice GUI that you like (I use SourceTree; it's nice, but Mac/Windows only, and IMO the Mac version is superior to the Windows one).

#1: Branches over Forks
In DVCS there's not really much difference between a branch and a fork (clone). You can use both to work on something outside of the main 'trunk' and then merge it back later. But there are some very good reasons to use branches rather than multiple forks (like I see a lot of people do, and like I did starting out). The main reason is that it is much easier and faster to switch your working copy between branches of the same repo than it is to change repos entirely. This comes in very handy (more on that later). You don't have to submit a pull request (PR) from your 'unified' branch to the main repo's 'unified' branch; you can select any of your own branches to submit a PR from, which leads into the next point...

#2: Don't commit to 'unified' in your fork
If you keep unified (the main branch) 'pure and undefiled', then you know that every time you click the 'sync now' button in your bitbucket fork (and then pull to your local), 'unified' will contain exactly the current state of the main repo. If you want to do a 'quick-fix' patch to the current main, you can just create a new branch off of 'unified' at the current point in time, and submit that branch as a PR without worrying about it containing any of your other, unrelated WIP or a pending PR.

Another use of this is that you can easily switch to unified and build if you want to test whether an issue is being caused by ML main or your own code. Switch to 'unified' and build, then run some tests; switch back to your own branch, build and test. Now you know the source of your issue. If you have uncommitted local changes when attempting to switch branches, you can easily 'shelve' them and restore them later.

This also allows you to keep using the same fork and keep it up to date, even if you have a PR pending for a while (which happened to me frequently).

#3: Keep your personal tweaks in their own branch
We all have our own personal tweaks for ML (disabling the warning screen, flexinfo customizations, etc.) that would never get merged. To avoid them finding their way into your PRs, keep them in a separate branch. This makes it easy for you to still keep them under version control and easily merge them with updates from your other feature branches (the ones intended to be PRs) and changes from the main repo.

#4: Use a 'working' branch
So now you've got a bunch of branches for each different feature you're working on and for your tweaks. You say: now I can't use all of them together for my own working version of ML that I actually use day to day out in the field. Wrong! This is where the 'working' branch comes in. The purpose of this branch is simply to create your 'working' version of ML, with all your changes, pending PRs, and tweaks combined. All you do is create this branch and merge all of your other branches into it. Don't ever work on this branch or commit anything to it (other than merge commits). It's simply there to be merged onto, like a 'branch aggregator.' (You can even pull other devs' tweaks that you want to use into your working branch, like 'unsafe' stuff from TL.)

Make your changes in other branches and then merge them into this branch, as shown below. This lets you keep your changes separate (so you can easily PR them separately). When you are testing and working on certain things, you do it in those particular branches. This helps you track down the source of bugs: if you're working on two different features at the same time and you have a bug, you might not know which feature is the source, but if you keep them separated like this, you'll know instantly. Finally, when you're done developing/testing and ready to have Your Version Of ML™ to work with in the field, just merge everything to working and build.
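For example, refreshing a working build might look like this (the branch names are illustrative):

hg update working
hg merge my-feature
hg commit -m "merge my-feature into working"
hg merge tweaks
hg commit -m "merge tweaks into working"
make clean && make zip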

#5: Close unused branches
This wasn't exactly intuitive to figure out in bitbucket, so I'm simply sharing how you do it, so that you can keep your 'tree' pruned of branches you're no longer using:
Click on 'branches' in bitbucket.
For the branch you want to close, hover your mouse near the right edge of the list; you'll see three dots.
Click and you get a pull down with an option to close the branch.


If you have any additional tips, please share them. I'll update this post with others' tips or new things as I learn them.
#15
Share Your Videos / HDR Sunset Timelapse
November 25, 2013, 04:06:31 AM


I shot this with an 1100D, which unfortunately doesn't have AutoETTR working, so I used Av mode in the camera, with max ISO 400. Each frame is a 2-shot bracket: -1 EV and +3 EV. I merged the frames with a photoshop script I wrote (find it here: http://www.magiclantern.fm/forum/index.php?topic=7682.0) into 32-bit TIFF files.

Then I toned with ACR in Bridge and used another script I wrote to deflicker (here: http://www.magiclantern.fm/forum/index.php?topic=8850.0).

ACR Settings:
Highlights: -80
Shadows: +80
Clarity: +30
Saturation: +30

I was going to pull the TIFFs directly into AE, but AE didn't apply the ACR settings from Br. I realized that Br didn't create xmp sidecars; I guess it was storing the ACR metadata directly in the TIFFs. So I just exported to JPEG from Br, then into AE, then dynamic link to Pr. Total processing time was probably 5-6 hours.

This is my alma mater, it's our first season to have a football team in more than 70 years. That's me on the keys playing our fight song.
#16
Share Your Photos / Comet ISON
November 20, 2013, 03:29:00 AM
A photo of comet ISON I got early this morning:



15 x 30s subs ISO1600, Canon 60Da, 6" (150mm) Newtonian @ f/5

Post in PixInsight and Photoshop; for this shot, I merged a stack that was comet-aligned with a stack that was star-aligned.

Despite the clouds, light pollution, and full moon, it turned out pretty well.
#17
This script is for deflickering and ramping pretty much any ACR setting from Adobe Bridge over a sequence of images.

Website:
http://davidmilligan.github.io/BridgeRamp

Script download:
https://raw.github.com/davidmilligan/BridgeRamp/master/BridgeRampingScript.jsx

Place this script in:
Windows:
%APPDATA%\Adobe\Bridge CC\Startup Scripts\
Mac:
~/Library/Application Support/Adobe/Bridge CC/Startup Scripts/

to have it automatically load on startup (or you can just double-click it or open it with Bridge when you want to use it). It will add three new items to the context menu: 'Ramp...', 'Ramp Multiple...' and 'Deflicker...'

Simply select all the images you want to ramp, right click, go to 'Ramp...', 'Ramp Multiple...' or 'Deflicker...' and fill out the dialog.

'Ramp Multiple' ramps the existing ACR settings that you select from the first image to the last (kind of like a smart version of the 'Synchronize' feature in the ACR dialog). You can also specify keyframes by rating them 1 star. It will ramp settings from one keyframe to the next. The first and last images are considered keyframes even if not marked. The deflicker will use keyframes marked like this as well.

This script is useful for timelapse as well as RAW video. There's an "additive" mode so you can add to the existing value (useful for modifying the exposure value calculated by ML Post Deflicker; easier than the exiftool way, and you can make it ramp as well).

There's also a version of this script for Lightroom:
Download:
https://github.com/davidmilligan/LrDeflick/releases/download/v0.1.0/

Code
https://github.com/davidmilligan/LrDeflick
#18


Canon 60Da and 1100D
Canon 10-22mm f/3.5-f/4.5
Rokinon 8mm Fisheye

Sunset and Sunrise sequences were Auto ETTR + Post Deflicker (with some manual deflickering in post, due to bad post deflicker settings on my part)

Also used adv_int.mo

Workflow:
Br/ACR > AE > Premiere
#19
Share Your Videos / Auto ETTR Sunrise Timelapse
October 09, 2013, 02:19:30 AM


60Da, Canon 10-22mm
Cleanup in Bridge/ACR, Edited in AfterEffects
Interval Period: 45s (15s longer than slowest shutter to give Post Deflicker plenty of time to work)
AutoETTR + Post Deflicker
Av: f/4
Slowest shutter: 30"
Max ISO: 1600
#20
WIP for replacing this module with a script -> http://www.magiclantern.fm/forum/index.php?topic=17570





Advanced Intervalometer (adv_int.mo)

This module is for advanced ramping and control of exposure parameters during an intervalometer sequence. With this module you specify the ramping by creating keyframes with the values of the various parameters that you want to ramp to. When the intervalometer is running, the module ramps the specified parameter(s) from one keyframe to the next.

Update: The Advanced Intervalometer menu now shows up under the Intervalometer submenu.


I think some cool effects can be done by ramping aperture (letting AutoETTR compensate the expo with Tv/ISO) and focus, or a combination of the two. You could have a scene where everything is in focus at f/22 and then ramp down to f/2 to blur out everything but your subject. The video below is my first quick-and-dirty test of this module and does just that:

I ramped the Av from min to max and back to min, and let AutoETTR+Post Deflicker take care of the exposure.

The other potential use for Av ramping is a full-day-to-full-night (or vice versa) time-lapse, where you might want a pretty small Av during the day, and then open it all the way up and move focus to infinity to get the faint light of stars/milky way when it gets dark. So basically you are making the "artistic" exposure choices, with your knowledge of the time of sunset/twilight, while AutoETTR takes care of optimizing the exposure.

Here are some examples where I used Av ramping with AutoETTR:




Tv ramping could be very useful for doing some varying amounts of motion blur. Think car lights at night: by varying the shutter angle, you vary the length of the streaks of car lights. I've seen this done in timelapses, and the effect is very cool.

Changing the actual intervalometer period allows you to create accelerating or decelerating timelapses without having to waste shutter actuations (you can easily accelerate or decelerate in post, but you will in essence be throwing out frames). For example, ramping the interval from 2s to 8s while playing back at a constant frame rate makes time appear to pass 4x faster by the end. Another thing you could do is to actually slow down your footage in post by the same amount you ramp the intervalometer period. This would give the effect of the time-lapse not actually changing speed, but it would become more or less "choppy".

To use, download the latest nightly and add this module to the ML/modules folder:
https://bitbucket.org/dmilligan/magic-lantern/downloads/adv_int.mo

See the readme for instructions on how to use.

Current list of settings that can be ramped

  • shutter
  • aperture
  • ISO
  • focus
  • interval time
  • bulb duration
  • white balance

Updates:
[2014-2-25] Updated to work with new menu features (multi-level submenus and menu_caret)
[2014-3-8] Updated to work with module version 5; added the ability to adjust Tv, Av, ISO from within the adv_int menu, so you don't have to leave the menu and come back



Disclaimer:
I'm not responsible for anything that might happen to your camera from running my code. Use at your own risk.
#21
Share Your Videos / Boston Sunset Timelapse
September 15, 2013, 08:45:45 PM


Shot with Auto ETTR and Post Deflicker. Interval time 30", max shutter 20" (to give post deflicker time to work). This is the first time I've had any luck using auto ETTR with the intervalometer (I'm much more in favor of a carefully planned manual expo ramp, which currently isn't really possible with ML), and I still had problems. Auto ETTR does not adjust aperture (correct me if I'm wrong), which really needs adjusting when going from full daylight to full darkness. I tried manually adjusting it myself in between shots, but it caused ML to crash and I had to quickly restart the camera to keep the sequence going; fortunately, you can't really tell where this happened. All that being said, I am happy with the results, though I'm not sure they would translate to a more difficult scene or more drastic transition.

(Also, the second, dimmer moon is actually just a reflection on the window I was shooting out of)
#22
I've set up an Xcode project for compiling and editing ML on a Mac. I'll have to say it's pretty dang spiffy. The Xcode editor is very nice: syntax-aware, auto-indent, auto-complete, and 'Jump to Definition' works very well (Jump to Definition is invaluable for newbies to the ML code base like myself). You can search in project files for symbols, regex, etc. When building, Xcode automatically jumps to and highlights code errors and warnings. It's all pretty sweet.

I thought I'd share the project file in case anyone else would like to use it:

https://bitbucket.org/dmilligan/magic-lantern/downloads/magic-lantern-xcode.xcodeproj.zip

Download the project file to the root of your ML source, which needs to be: ~/magic-lantern
You will need to change the PWD build setting if you want to locate it somewhere else. Also, depending on your setup, you may need to modify PATH. The PATH I set up assumes you used brew to install the various things ML needs to compile.

There's a target called 'do-not-build'; it exists to get all of the autocomplete stuff to work. If you create new source files or modules, make sure to add them to this target so that auto-complete and such will work with them.

TODO:
Get the emulator set up and be able to run it automatically from Xcode with the debugger.

EDIT: zipped the project file b/c apparently it wasn't working
#23
Think of it like this: extending Image Review into the time even after another exposure has started. This would be incredibly useful for astrophotography and night time-lapses. Typically you are using the intervalometer and taking very long exposures (30s+) with as little time between shots as possible (because clear dark sky time is very precious and rare, at least for me, and I want to spend absolutely every second of it I can collecting photons). This basically means you get no, or very very little, time to review images after they are taken before the screen blacks out and starts exposing again. There are all sorts of things that can go wrong while shooting astro (for example: dew, tracking error, focus), and it's very important to be able to keep an eye on what is going on.

It is possible to write to the screen during a bulb exposure; I have tried it (with text). The first thing is disabling this very annoying line of code in shoot.c:

// for 550D and other cameras that may keep the display on during bulb exposures -> always turn it off
if (DISPLAY_IS_ON && s==1) fake_simple_button(BGMT_INFO);


BTW, as a side note, I always have this line commented out in my own personal builds b/c I would much rather see the bulb time indication on the screen than save a little battery by turning the screen off. Batteries are cheap, and sometimes I use an AC adapter anyway. When you're taking 5 to 10+ minute exposures, it's really nice to know how much time you have left.

So it seems very feasible to draw the most recently taken image to the screen during this time. I'm trying to figure out how to do this myself, but I'm not very familiar with the way the vram works and could use a few pointers (haha, pun intended). I've looked at the code for ghost image, and I see that it is basically going into play mode, capturing the vram, and saving it to a file. Then it somehow writes that file over the lv display. I'm guessing that I could do something like that here: during the brief image review time in between shots, capture the vram buffer; then, after the next shot starts exposing, restore it. I'm assuming I wouldn't really need to write to a file either; just copy it to somewhere in dynamic memory and then copy it back, saving a lot of IO time.

I'm also a little nervous about just trying stuff and writing into these various buffers, which seems like it could be dangerous, esp. since I don't really know what they do or how they work.
#24
So for a long time it has annoyed the crap out of me that there is no way to automate the HDR toning process in photoshop (for the purpose of timelapse, etc.). Searching the internet, pretty much all you find is people telling you to use some other software to do it, like photomatix. Well, I don't freakin' want to buy another piece of software when I've already dropped a boatload on photoshop and it should, at least in theory, be able to do this.

After scraping together bits of scripts I found across the internet and single-stepping through the MergeToHDR.jsx script built into photoshop to figure out how it works, I have finally come up with a script that can do this.

https://github.com/davidmilligan/PhotoshopBatchHDR/blob/master/Batch%20HDR.jsx

Guide on how to use this script:

https://github.com/davidmilligan/PhotoshopBatchHDR/wiki

Being a C++/C# developer by day, non-strongly-typed languages like javascript annoy the hell out of me, and the poor documentation and crappy IDE of ExtendScript only make matters worse. So if anyone with more javascript/adobe scripting experience would like to help me polish this script up, that would be great.

Modify the 'numberOfBrackets' variable to tell the script how many brackets are in each shot. The MergeToHDR script will automatically determine which is which (+EV, -EV, etc.), so the bracket order doesn't matter, as long as you always take the same number of brackets for each shot.

The script is currently not able to load individual raw files for HDR toning (i.e. numberOfBrackets = 1), which is something I'd like to be able to do, since even a single-exposure raw file is 14-bit, which is higher dynamic range than an 8-bit screen, and it would allow HDR time lapses without as many actuations. I don't really know how to load raw files; you have to specify a bunch of options when you try to open one, and I don't really know what they should be. The MergeToHDR script will load raw files, but it requires a minimum of 2 files, so it won't work with individual files, and I can't really figure out what it's doing to load raw files (or at least I haven't tried really hard to figure it out yet).

Once I figure out how to load single RAW files I think this script could also be very useful for those doing RAW video, as it would provide a way to do HDR toning effects on RAW video footage.

I'd also like to figure out a way to avoid hard coding the toning settings in the script, perhaps reverse engineer the HDR preset file format and have the script load a preset file and use the values in that.

Also, I'm using photoshop CC, so I'm not sure if it works on earlier versions, but it should, as I don't think adobe has changed any of the HDR stuff in a while.
#26
First off, kudos to you guys for making such a wonderful piece of software. I've been using ML for about 3 months now and I don't know what I would do without it. It makes my cameras 10x more useful.

As an astrophotographer who shoots a lot in very low light, often with manual-focus-only optics (such as a telescope), I would find it very helpful if ML could somehow assist me in achieving focus. Focusing with a telescope is a long, arduous, trial-and-error process. I think it could be sped up and its accuracy improved with the help of ML. Here's basically what I do to get focus: take an exposure with the shortest duration that I can and still get enough image to judge focus (usually at least several seconds; if I'm photographing something really faint, it could be as long as 30s; also with the 2-sec timer and mirror lockup so I don't get any shaking). Then move the focus a little and take another picture, and so on and so forth until I feel like I can't get it any better.

So here's my request: a manual focus assistant. It takes an image and analyzes focus, then says to you "move the focus". Then it takes another, compares it to the previous one, and says either "keep going", "no, that's worse, go the other direction", or "that's as good as you're going to get". A sketch of the logic is below.
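Sketched as a script-style loop, it's a simple hill climb (all function names here are hypothetical, just to show the logic):

local best = measure_focus(capture_image())   -- hypothetical helpers
while true do
    print("move the focus a little")
    wait_for_user()                           -- hypothetical; e.g. wait for a button press
    local current = measure_focus(capture_image())
    if current > best then
        print("keep going")
        best = current
    elseif current < best * 0.98 then         -- clearly worse
        print("that's worse, go the other direction")
    else
        print("that's as good as you're going to get")
        break
    end
end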

I think this is simple enough that it could almost be accomplished with a picoc script. I looked at the API a while back, but I don't really remember if there were sufficient focus-checking functions to accomplish this. The other issue is that analyzing focus in an image that is mostly a star field is a bit different from a "normal" subject (though theoretically much easier, since you just have lots of little point sources of light you can check).

The second part of this would be an automatic mode for lenses that have AF when there is not enough light available for AF to work. It would basically do the same thing, except fully automatically; then it would lock the AF, or just tell the user to switch the lens to MF so the camera doesn't try to AF on the next shot and screw it up.

Thanks!
David