Messages - jordancolburn

#51
NM, just downloaded a newer build and it already functions the way I wanted.  Thanks ML team for putting in features before I even know I want them!  1st raw, now mindreading.  Big week.
#52
This might not be relevant, since I've only tested on the 600d, but would it be possible to just overlay a white framed box before recording for framing?  That plus popping into the 10x zoom would be enough to get focus and framing.  (The build I had only seemed to add the white frame box after recording was started unless I missed a feature?).
#53
Hardware and Accessories / Re: Old Russian Lenses
May 15, 2013, 06:46:24 PM
I have an M42 to EOS mount.  For video, just get the cheapest one without any focus confirm sensor.  I've also had luck adapting older Pentax mount lenses.  The only downside to the cheap adapters has been a little rotational "play" after attaching the lens to the camera, but it hasn't seemed to affect video quality, and I'm sure a slightly nicer adapter would eliminate the issue.
#54
Quote from: 1% on May 14, 2013, 06:29:43 PM
Sraw and then frames like 1280x480 or something like that... a few different ones worked. 960x480 is pretty much continuous. Also recorded in 640x480.. I get a few more MB writing this way up to 18... so shrinking YUV edmacs will have effect... if we figure out how to disable HD buffer and face detection or make it output 1 line only I think there will be noticeable performance improvement.
I believe I turned the experimental sraw function on, but still received about the same results.  My class 10 card isn't very fast (15-18 MB/s in the benchmark), but I still get the partial pink frames even at the low 720x320 resolution.  It appears to happen slightly less frequently at the lower resolutions, but there is still at least one frame of pink junk per second, even with continuous shooting and no frame drops.  Is my card just that much too slow, or could I be missing another important setting?
#55
Quote from: 1% on May 14, 2013, 04:11:18 PM
That won't work for 550D, its 600D only. I'm  not getting pink frames anymore most of the time.. sometimes jacked up frames. I haven't tried 1738 or 1740 again which were known bad.
What settings are you using on the 600d?  I turned global draw off and went really conservative on the frame size and get constant pink frames every 5-10 frames with the latest binaries from your tragic lantern 2.0 repo.  Btw, thanks for the builds!
#56
Quote from: KahL on May 14, 2013, 02:41:21 PM
What's going on w/ the 600D? Is it available to test and where?
The forum user 1% has put together a compiled build for the 600d here (be warned that the SD card limits you to smaller than HD resolution):
https://bitbucket.org/OtherOnePercent/tragic-lantern-2.0/downloads

I had luck installing and using it, but no matter what frame sizes I specified, I got those weird cropped pink frames every 5-10 frames (using a Transcend class 10 card, which others said they were able to use at small resolutions without issue).  I'm extremely excited about the RAW possibility; the files just have so much latitude!  Any tips on how to get pink-frame-free video on a 600d from someone who has tried it?
#57
Shoot Preparation / Re: Audio, ML, and T3i
March 01, 2013, 09:51:01 PM
The solution I've found is to use a Zoom H4 (the old model, not the H4n) as the main mic mounted onto the camera.  I monitor audio through the headphone out of the H4 and run the line out of the H4 into the T3i with the mic input gain set a notch or two above 0.  The H4 is a decent stereo mic as is, and I'm able to use the XLR ins for two channels of lav in a sit-down interview.  Because of the line-in, no sync is needed, but if you hit record on the Zoom for important interviews, you always have a backup in case something clips or drops out.  You can also do this with the H4n or any audio recorder that doesn't have a line out by using the Sescom monitoring cable or by making a similar cable yourself.
#58
Quote from: nedyken on December 14, 2012, 11:50:09 PM
But I think I'm leaning towards what Francis says here.  I'm a one man production team.  Anything I work on will be shot by me and edited by me.  Maybe it's a terrible habit to get use to this process, but I think what I'd probably do is just set to a flat style and expose so that I wasn't losing anything in my whites.   At least personally, I don't mind not seeing the "final result" on the LCD. 
I think the goal is not to see the "look" but to set the exposure and focus accurately; a flat style gives you less information on the screen because everything is squished into the middle to preserve more info for editing.  ML makes this easy: just set standard/vivid/whatever as the LiveView picture style and any flat style as the Rec picture style, and ML automatically switches only while you're recording.  Very handy!

Quote
does that mean I should add the "Sharpen" effect in Final Cut with an amount of 4.0?  Would that bring back the amount of sharpening I lost or is it a different type of scale?
Different scale entirely, so sharpen until it looks good, but be sure not to overdo it.  It does make a huge difference; my general rule is to dial it up until I can clearly see the effect, then back off slightly.
#59

#1 - The LUT provides a convenient way to map the flat recorded data and approximate the final "look".  In big productions it might be applied between the camera output and a field monitor to give the director a better sense of the shot and avoid the "flat grey" look.  As stated above, the way to do this in ML doesn't involve a LUT, but rather setting the LiveView picture style to something approximating the final look and the record style to flat.  This method allows you to set focus and exposure properly, then record flatter to give a little more latitude in post.

#2 - This article, http://www.hurlbutvisuals.com/blog/2010/12/in-praise-of-dissent-adobe-cs5-paves-the-way/, suggests that, in Premiere at least, the internals process DSLR footage at 4:2:2, so there is no need to convert.  Final Cut may vary.

#3 -  It is better to underexpose slightly because once something is 100% white, it is gone.  A good way to avoid this is by setting zebras.

This article, http://www.hurlbutvisuals.com/blog/2012/01/7-tips-for-hd-color-correction-and-dslr-color-correction/, seems to have the best workflow for correcting flat recorded DSLR footage.
Don't forget to sharpen in post too.  The flat styles usually turn down the in-camera sharpening so you can adjust it to taste in post rather than having it baked in; that's mainly what all of this comes down to.  For quick, easy family-moment recording, use standard and save yourself the headache.  For artistic use, where you want more control and have the time, use a flat style.  I prefer the Prolost settings over CineStyle because they're very easy to set quickly on any camera.


I'm pretty much new too, just thought I'd share some things I picked up from searching around.  Hope it helps!

#60
Love the idea for this converter.  I'm working with a slightly older machine right now, and while I considered using proxies before, the hassle of manually keeping track turned me off of it.  This should really help speed up the editing process.  Thanks!  I'll give it a try and let you know how it goes.
#61
I did some searching and the only references I could find were a one sentence description on the wiki and some references in bug reports, so I really hope I'm not duplicating anything.

I've been using the Rec Picstyle function to set exposure and focus with the Canon standard style, but record in the Prolost flat style for a little more color/sharpness leeway in post.  The issue is that every time you hit record, a black bar of text appears announcing the picstyle change, obscuring part of the live view image for the first bit of each take.  Would it be possible to have a "q option" to turn this notification off?
#62
Main Builds / Re: 600D Audio TEST release - 2.3 based
October 23, 2012, 08:45:32 PM
Quote from: 1% on September 28, 2012, 07:52:58 PM
Yea, its gone.

Until next ML release.

https://bitbucket.org/OtherOnePercent/tragic-lantern/downloads

But need to have card formatted to exfat.
So If I wanted to test out the audio features for 600d, I could:
1) Download your tragic-lantern build onto an exfat card (any side effects of exfat?)
2) Is it possible to try one of the nightly builds from a1ex, since the source got merged into his repository?
3) Wait until the next official release (which should be coming....when? not trying to ask for a date or anything, I know that's ridiculous, I'm more curious about a general timeline, like before or after the new year)

Thanks again for all the hard work on the 600d audio side of things.
#63
Quote from: nanomad on October 06, 2012, 08:51:23 PM
Good luck with getting an hardware vendor to sell this kind of stuff to you. Just think of the number of patents and agreements involved in a "simple" image sensor design  ::)
I'd say it's more of a quantity issue.  You're basically not worth dealing with for the chip manufacturers unless you're talking quantities in the hundreds of thousands or more.
#64
Hardware and Accessories / Re: Tripod suggestions?
October 22, 2012, 07:08:09 PM
We recently bought the Pearstone VT2100 fluid head tripod from B&H for $70.  It's really good for how cheap it is, with very fluid motion.  The only real downside is that the head isn't removable, so upgrade options are limited.
#65
http://jordancolburn.com/2012/10/22/751/

Since installing ML on my T3i, I've been very interested in learning to create timelapses.  I'm specifically impressed with timelapses that have Ken Burns-style motion, like this one from Philip Bloom: http://philipbloom.net/film/24-hours-of-neon/

The only downside is, I didn't have all the software tools to make it work.  To solve this, I wrote my own Python script to automate the timelapse process and provide a GUI to select unlimited keyframes for Ken Burns-style motion.  My first test can be seen below.



The script builds upon the RAW deflicker script written by a1ex here: http://www.magiclantern.fm/forum/index.php?topic=2553.0
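For anyone curious what the deflicker step actually does: it measures each frame's raw median brightness and computes an exposure correction in stops (EV) relative to the first frame.  A minimal numpy sketch of that idea (`exposure_corrections` is my own name for illustration, not a function from a1ex's script):

```python
import numpy as np

def exposure_corrections(medians):
    """EV offsets that bring each frame's median back to the first frame's.

    medians: per-frame raw median brightness values.
    A frame twice as bright as the first gets -1 EV, half as bright gets +1 EV.
    """
    m = np.asarray(medians, dtype=float)
    return -np.log2(m / m[0])
```

The real script then detrends these corrections (so intentional brightness changes like sunsets survive) and passes each offset to ufraw-batch's --exposure flag.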

Current Dependencies:
(can all be installed on most versions of GNU/Linux with apt-get)
*Timelapse ffmpeg*
python
yasm
x264
imagemagick
ffmpeg

*Deflicker*
numpy
scipy
matplotlib
dcraw
ufraw
imagemagick

*GUI*
PIL

To start it, use the command line format:
python timelapse.py -i raw -o jpg -r

Where timelapse.py is the script, raw is the input folder containing Canon RAW files, and jpg is the destination folder for the converted JPGs and movie. The -r switch designates that the files in raw will be developed; leave it out if you already have converted files in the jpg directory. The script is a little rough around the edges, but it works well enough for me. Comment if you have any improvements or questions.
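The Ken Burns motion itself comes down to linearly interpolating the crop box between keyframes.  A simplified standalone sketch of that idea (`interpolate_box` is hypothetical; the actual script does this incrementally inside processTimelapse):

```python
def interpolate_box(keyframes, frame):
    """Crop box for `frame`, linearly interpolated between keyframes.

    keyframes maps frame number -> (sizex, sizey, centerx, centery).
    Frames before the first or after the last keyframe are clamped.
    """
    ks = sorted(keyframes)
    if frame <= ks[0]:
        return keyframes[ks[0]]
    if frame >= ks[-1]:
        return keyframes[ks[-1]]
    if frame in keyframes:
        return keyframes[frame]
    # surrounding keyframes and the fractional position between them
    lo = max(k for k in ks if k < frame)
    hi = min(k for k in ks if k > frame)
    t = (frame - lo) / float(hi - lo)
    return tuple(a + (b - a) * t
                 for a, b in zip(keyframes[lo], keyframes[hi]))
```

Each interpolated box then gets scaled back to the full-resolution image, cropped, and resized to 1920x1080, which is what produces the smooth pan/zoom.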


from __future__ import division
import os
import glob
import sys, re, time, datetime, subprocess, shlex, getopt, fnmatch
from math import *
from pylab import *
import Tkinter
import ttk
import tkMessageBox
from PIL import Image, ImageTk

# RAW deflickering script
# Copyright (2012) a1ex. License: GPL.
def progress(x, interval=1):
    global _progress_first_time, _progress_last_time, _progress_message, _progress_interval

    try:
        p = float(x)
        init = False
    except:
        init = True

    if init:
        _progress_message = x
        _progress_last_time = time.time()
        _progress_first_time = time.time()
        _progress_interval = interval
    elif x:
        if time.time() - _progress_last_time > _progress_interval:
            print >> sys.stderr, "%s [%d%% done, ETA %s]..." % (_progress_message, int(100*p), datetime.timedelta(seconds = round((1-p)/p*(time.time()-_progress_first_time))))
            _progress_last_time = time.time()

def change_ext(file, newext):
    if newext and (not newext.startswith(".")):
        newext = "." + newext
    return os.path.splitext(file)[0] + newext

def get_median(file):
    cmd1 = "dcraw -c -D -4 -o 0 '%s'" % file
    cmd2 = "convert - -type Grayscale -scale 500x500 -format %c histogram:info:-"
    #~ print cmd1, "|", cmd2
    p1 = subprocess.Popen(shlex.split(cmd1), stdout=subprocess.PIPE)
    p2 = subprocess.Popen(shlex.split(cmd2), stdin=p1.stdout, stdout=subprocess.PIPE)
    lines = p2.communicate()[0].split("\n")
    X = []
    for l in lines[1:]:
        p1 = l.find("(")
        if p1 > 0:
            p2 = l.find(",", p1)
            level = int(l[p1+1:p2])
            count = int(l[:p1-2])
            X += [level]*count
    m = median(X)
    return m

def deflickerRAW(inputfolder, outputfolder):
    ion()

    progress("Analyzing RAW exposures...")
    files = sorted(os.listdir(inputfolder))
    M = []
    for k,f in enumerate(files):
        m = get_median(os.path.join(inputfolder, f))
        M.append(m)

        E = [-log2(m/M[0]) for m in M]
        E = detrend(array(E))
        cla(); stem(range(1,len(E)+1), E)
        xlabel('Image number')
        ylabel('Exposure correction (EV)')
        title(f)
        draw()
        progress(k / len(files))

    if not os.path.exists(outputfolder):
        os.makedirs(outputfolder)

    progress("Developing JPG images...")
    for k,f in enumerate(files):
        ec = 2 + E[k]
        cmd = "ufraw-batch --out-type=jpg --overwrite --clip=film --saturation=2 --exposure=%s '%s' --output='%s/%s'" % (ec, os.path.join(inputfolder, f),outputfolder, change_ext(f, ".jpg"))
        os.system(cmd)
        progress(k / len(files))

#declare variables############
moving=False
resize=False
aspectx=16
aspecty=9
rectcenterx=0
rectcentery=0
rectsizex=0
rectsizey=0
imagesizexpre=0
imagesizeypre=0
imagesizex=0
imagesizey=0

#Events#########################
#triggers on left click in canvas
def xy(event):
    global rectcenterx, rectcentery, rectsizex, rectsizey, moving, resize

    moving=False
    resize=False
    #detect rectangle center grab for move
    if event.x>(rectcenterx-int(rectsizex/2)) and event.x<(rectcenterx+int(rectsizex/2)) and event.y<(int(rectcentery+rectsizey/2)) and event.y>(int(rectcentery-rectsizey/2)):
        moving=True
    #detect lower right rectangle corner grabs for resize
    if event.x>(rectcenterx+rectsizex-rectsizex/4) and event.x<(rectcenterx+rectsizex+rectsizex/4) and event.y<(rectcentery+rectsizey+rectsizey/4) and event.y>(rectcentery+rectsizey-rectsizey/4):
        resize=True

#triggers on motion in canvas
def canvasmotion(event, canvas, rectangle):
    global rectcenterx,rectcentery,rectsizex,rectsizey,moving,imagesizex,imagesizey,resize,aspectx,aspecty,checkboxstate
    if checkboxstate.get():
        if moving:
            #NOTE: the bounds checks below lost their angle-bracketed parts to
            #the forum's HTML filtering; reconstructed here to keep the crop
            #box inside the image while dragging
            if (event.x+rectsizex) < imagesizex and (event.x-rectsizex) > 0:
                rectcenterx=event.x
            if (event.y+rectsizey) < imagesizey and (event.y-rectsizey) > 0:
                rectcentery=event.y
        if resize:
            #first two conditions reconstructed for the same reason
            if (event.x < imagesizex) and (event.x-rectcenterx >= 0) and (rectcentery-((event.x-rectcenterx)*aspecty/aspectx))>0 and int((event.x-rectcenterx)*2*imagesizexpre/imagesizex)>=1920:
                rectsizex=event.x-rectcenterx
                rectsizey=(rectsizex*aspecty)/aspectx
        drawRect(canvas,rectangle)

def drawRect(canvas, rectangle):
    global rectcenterx,rectcentery,rectsizex,rectsizey,moving,imagesizex,imagesizey,resize,aspectx,aspecty,keyframes,currentframe
    canvas.tag_raise(rectangle)
    canvas.coords(rectangle, rectcenterx-rectsizex, rectcentery-rectsizey, rectcenterx+rectsizex, rectcentery+rectsizey)
    keyframes[currentframe]=[rectsizex,rectsizey,rectcenterx,rectcentery]

def changeFrame(FrameNumSpin, outputfolder, canvas, canvasimage, rectangle, c1):
    global rectcenterx,rectcentery,rectsizex,rectsizey,photo,currentframe,checkboxstate
    currentframe = int(FrameNumSpin.get())
    files = sorted(glob.glob(outputfolder + "/IMG_*.jpg"))
    image = Image.open(files[currentframe-1])
    image.thumbnail((350, 350), Image.ANTIALIAS)
    photo = ImageTk.PhotoImage(image)
    canvas.delete(canvasimage)
    canvasimage = canvas.create_image(0,0, image=photo, anchor=Tkinter.NW)
    c1.deselect()
    for keyframe in keyframes:
        if keyframe==currentframe:
            c1.select()
            rectsizex=keyframes[currentframe][0]
            rectsizey=keyframes[currentframe][1]
            rectcenterx=keyframes[currentframe][2]
            rectcentery=keyframes[currentframe][3]
            drawRect(canvas, rectangle)
            break

def checkboxClicked(canvas,rectangle,c1):
    global rectcenterx,rectcentery,rectsizex,rectsizey,moving,imagesizex,imagesizey,resize,aspectx,aspecty,imagesizexpre,imagesizeypre, photo, keyframes,currentframe,checkboxstate
    if currentframe == 1:
        c1.select()
    else:
        if checkboxstate.get():
            rectsizex=imagesizex/2
            rectsizey=(rectsizex*9)/16
            rectcenterx=imagesizex/2
            rectcentery=imagesizey/2
            drawRect(canvas,rectangle)
        else:
            del keyframes[currentframe]
            print "deleted keyframe"
            canvas.tag_lower(rectangle)

'''
#graceful exit
def ask_quit(root):
    if tkMessageBox.askokcancel("Quit", "Do you want to quit now?"):
        root.destroy()
'''

def initGUI(inputfolder, outputfolder):
    global rectcenterx,rectcentery,rectsizex,rectsizey,moving,imagesizex,imagesizey,resize,aspectx,aspecty,imagesizexpre,imagesizeypre, photo, keyframes,currentframe,checkboxstate
    #Create User Interface
    root = Tkinter.Tk()
    #root.columnconfigure(0, weight=1)
    #root.rowconfigure(0, weight=1)

    canvas = Tkinter.Canvas(root)
    files = sorted(glob.glob(outputfolder + "/IMG_*.jpg"))
    #add image to canvas
    image = Image.open(files[0])
    imagesizexpre = image.size[0]
    imagesizeypre = image.size[1]
    image.thumbnail((350, 350), Image.ANTIALIAS)
    imagesizex = image.size[0]
    imagesizey = image.size[1]

    #set initial rect size
    rectsizex=imagesizex/2
    rectsizey=(rectsizex*9)/16
    rectcenterx=imagesizex/2
    rectcentery=imagesizey/2
    currentframe=1
    keyframes={1:[rectsizex,rectsizey,rectcenterx,rectcentery]}

    photo = ImageTk.PhotoImage(image)
    canvasimage = canvas.create_image(0,0, image=photo, anchor=Tkinter.NW)
    rectangle=canvas.create_rectangle(rectcenterx-rectsizex, rectcentery-rectsizey, rectcenterx+rectsizex, rectcentery+rectsizey, outline="#fb0")
    #generate header button row
    HeaderRow = Tkinter.Frame(root)
    b1 = Tkinter.Button(HeaderRow, text="One")
    b2 = Tkinter.Button(HeaderRow, text="Two")
    checkboxstate=Tkinter.IntVar()
    c1 = Tkinter.Checkbutton(HeaderRow, text="Keyframe", variable=checkboxstate, command=lambda: checkboxClicked(canvas,rectangle,c1))
    headerLabel1 = Tkinter.Label(HeaderRow, text="frame #")
    headerLabel2 = Tkinter.Label(HeaderRow, text="of %d" % len(files))
    FrameNumSpin = Tkinter.Spinbox(HeaderRow, from_=1, to_=len(files), command=lambda: changeFrame(FrameNumSpin, outputfolder,canvas,canvasimage,rectangle,c1))
    b1.pack(side = Tkinter.LEFT)
    b2.pack(side = Tkinter.LEFT)
    c1.pack(side = Tkinter.LEFT)
    headerLabel1.pack(side = Tkinter.LEFT)
    FrameNumSpin.pack(side = Tkinter.LEFT)
    headerLabel2.pack(side = Tkinter.LEFT)
    HeaderRow.grid(column=0, row=0)
    c1.select()
    #create canvas
    canvas.grid(column=0, row=1, sticky=(Tkinter.N, Tkinter.W, Tkinter.E, Tkinter.S))
    #the event strings were eaten by the forum's HTML filtering;
    #"<Button-1>" and "<B1-Motion>" match the click/drag handlers above
    canvas.bind("<Button-1>", xy)
    canvas.bind("<B1-Motion>", lambda event: canvasmotion(event, canvas, rectangle))
    #generate footer button row
    FooterRow = Tkinter.Frame(root)
    footerLabel = Tkinter.Label(FooterRow, text="Useful help tips")
    b3 = Tkinter.Button(FooterRow, text="Three")
    b4 = Tkinter.Button(FooterRow,text="Process!", command=lambda: processTimelapse(imagesizex,imagesizey,imagesizexpre,imagesizeypre,inputfolder,outputfolder))
    footerLabel.pack(side = Tkinter.LEFT)
    b3.pack(side = Tkinter.LEFT)
    b4.pack(side = Tkinter.LEFT)
    FooterRow.grid(column=0, row=3)

    #root.protocol("WM_DELETE_WINDOW", ask_quit())
    root.title("Timelapse")
    #w,h = root.winfo_screenwidth(), root.winfo_screenheight()
    #root.geometry("%dx%d+0+0" % (w,h))
    root.mainloop()

def processTimelapse(imagesizex,imagesizey,imagesizexpre,imagesizeypre,inputfolder,outputfolder):
    global rectcenterx,rectcentery,rectsizex,rectsizey,aspectx,aspecty,keyframes
    #renumberjpeg(inputfolder,outputfolder)
    i=0
    files = sorted(glob.glob(outputfolder + "/IMG_*.jpg"))
    if not os.path.exists(outputfolder+"/resized"):
        os.makedirs(outputfolder+"/resized")
    for onefile in files:
        if fnmatch.fnmatch(onefile, '*.jpg'):
            print "cropping %s" % onefile
            image = Image.open(onefile)
            for keyframe in sorted(keyframes):
                if keyframe==i+1:
                    #exact keyframe: take its box and reset the interpolation
                    rectsizex=keyframes[i+1][0]
                    rectsizey=keyframes[i+1][1]
                    rectcenterx=keyframes[i+1][2]
                    rectcentery=keyframes[i+1][3]
                    interpolatelatch=0
                    rectsizexslice=0
                    rectsizeyslice=0
                    rectcenterxslice=0
                    rectcenteryslice=0
                    break
                elif keyframe>(i+1):
                    #between keyframes: step the box by a per-frame slice
                    if interpolatelatch==0:
                        rectsizexslice=(keyframes[keyframe][0]-rectsizex)/(keyframe-(i))
                        rectsizeyslice=(keyframes[keyframe][1]-rectsizey)/(keyframe-(i))
                        rectcenterxslice=(keyframes[keyframe][2]-rectcenterx)/(keyframe-(i))
                        rectcenteryslice=(keyframes[keyframe][3]-rectcentery)/(keyframe-(i))
                        interpolatelatch=1
                    rectsizex=rectsizex+rectsizexslice
                    rectsizey=rectsizey+rectsizeyslice
                    rectcenterx=rectcenterx+rectcenterxslice
                    rectcentery=rectcentery+rectcenteryslice
                    break
            #scale the on-screen box back up to full-resolution coordinates
            box = (int((rectcenterx-rectsizex)*imagesizexpre/imagesizex), int((rectcentery-rectsizey)*imagesizeypre/imagesizey), int((rectcenterx+rectsizex)*imagesizexpre/imagesizex), int((rectcentery+rectsizey)*imagesizeypre/imagesizey))
            print box
            area = image.crop(box)
            area = area.resize((1920, 1080), Image.ANTIALIAS)
            area.save(outputfolder+"/resized/%03d.jpg" % i, 'jpeg')
            i=i+1
    cmd = "avconv -i %s/resized/" % outputfolder
    cmd = cmd + "%" + "03d.jpg -r 24 -s hd1080 -vcodec libx264 -crf 16 %s/timelapse.mp4" % outputfolder
    os.system(cmd)

def main(argv):
    inputfolder = ''
    outputfolder = ''
    process = 0
    try:
        opts, args = getopt.getopt(argv,"hi:o:r",["ifolder=","ofolder="])
    except getopt.GetoptError:
        #usage text lost its angle-bracketed placeholders on the forum; restored
        print 'timelapse.py -i <inputfolder> -o <outputfolder> -r'
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print 'timelapse.py -i <inputfolder> -o <outputfolder> -r'
            sys.exit()
        elif opt in ("-i", "--ifolder"):
            inputfolder = arg
        elif opt in ("-o", "--ofolder"):
            outputfolder = arg
        elif opt == "-r":
            process = 1

    print 'Input folder is', inputfolder
    print 'Output folder is', outputfolder
    if process==1:
        deflickerRAW(inputfolder, outputfolder)
    initGUI(inputfolder, outputfolder)

if __name__ == "__main__":
    main(sys.argv[1:])

#66
I was just reading this thread last week and, lo and behold, yesterday I found an old Pentax K1000 at an estate sale with 3 lenses for $25: a 50mm f2.0 Asahi Pentax, a Tokina 80-200mm f4 push-pull zoom (with a constant aperture), and most importantly, a Focal 28mm f2.8 that will let me get the wider end of things I've been missing with the standard Canon 50mm 1.8 on my cropped T3i.

I ordered the adapters yesterday and can't wait to try the lenses out for video.  If you check Craigslist regularly, you should be able to find people selling old K1000s for $20-$50 with little or no info on the lenses; they usually come with a small kit someone has built up over the years, and they're an excellent way to start a decent lens collection for almost no money.
#67
I was looking at the forum today and began to notice a theme: the people who use ML are very creative and write tons of little scripts to quickly and easily take full advantage of all the great photo and video processing features available in ML.  This Windows utility is a perfect example:
http://www.magiclantern.fm/forum/index.php?topic=1097.0

A few other people and I commented to say we were developing similar programs.  I think it would be a really good idea to have a combined effort to create a GPL, multi-OS utility to process common ML functions such as HDR video, timelapse, and others.  It could have batch processing built in and enable plugin-like support for new ML features.

If you would be interested in using or developing something like this, please comment here to help brainstorm.

Malcolm's utility above uses C#, but most of the C# code should be easily portable to something multiplatform like Python while still using the same libraries.  I'm an electrical engineer, but I've just started working in software development.  I have some experience, but packaging a desktop app for delivery on multiple platforms will be a learning experience, to say the least.  Hope this idea is interesting to a few others of you out there as well.
#68
Cool, well, I didn't mean to completely derail your original thread, because the current app is really useful!  So, should we start a separate thread to gauge potential interest and talk about how best to get started?
#69
Quote from: Malcolm Debono on September 13, 2012, 06:36:03 PM
Such a tool would definitely be useful! Unfortunately I'm not quite familiar with any cross-platform language (except for Java), which is why I started this app (which was very basic at first, and was only intended as a simple personal project!) in C#. If it can be useful, I'd be more than willing to share any code from this app if anyone wants to work on such a tool :)
That would be great! I'm an electrical engineer by degree but working as a newbie software developer so I'm still learning a lot. 

AFAIK Python tends to be pretty portable (AKA, I can run my scripts on any linux/mac/win computer with all the library dependencies), I'm just not sure how to go about including all dependencies and packing it up really nicely for the different platforms.

It seems like most of the really complicated functions in your utility call other programs, and the C# just provides the GUI and handles batching/file operations, right?
#70
This is awesome!  I'm currently working on a similar utility in Python to provide a nice front end to some of the timelapse scripts I've seen around here for deflickering, and also to add a GUI for setting keyframes for simple Ken Burns-style motion during timelapses.  It too uses ffmpeg, imagemagick, and other tools that should be very portable.

I think it would be extremely beneficial to develop a GPL'd Magic Lantern companion app that supports multiple OSes, letting users log footage, create timelapses, and do simple HDR video, with extensibility to take advantage of new ML features as they come out.  I'd be more than willing to help support a project like that in any way.
#71
Main Builds / Re: 600D Audio TEST release - 2.3 based
September 12, 2012, 01:48:26 AM
I first installed ML on my T3i to get the audio features and was pretty disappointed to learn that they didn't work.  There were enough amazing features to distract me for a while, but when I started looking again I found your original thread about getting audio working on the T3i.  After reading all 20 or so pages to here, I'm really excited to start testing later tonight and help out.  Thanks for all your work!

Also, 1%, I saw you were looking into the battery-powered Nady preamp; I've had really good luck with the really cheap Behringer mixers.  With condenser mics or electret lavs, you don't need to set the preamps on the camera more than one notch above 0, and you don't even push the Behringer pres that hard, so the audio quality is very good despite the cheap mixer (although you are stuck near an outlet, so it's mostly for interview-type scenarios).
#72
I gave it a go the other night and got some very nice looking video (A little dark, but it might have been more my source footage).  I plan on playing around with this a lot more, as well as scripting for other video tasks such as creating timelapses.
#73
Thanks for the awesome script, I'll have to give it a try tonight. 

How important is the image alignment?  A few of the other workflows seem to do a simple split and then HDR-blend the frames without alignment.  If the motion between frames (1/60 s) isn't too much, skipping alignment might save some processing time and make the script more useful for longer clips.

Thanks again!
#74
HDR and Dual ISO Postprocessing / Re: Lightworks and HDR
September 06, 2012, 06:27:32 PM
Quote from: b4rt on June 29, 2012, 10:21:30 AM
Thanks a lot for this workflow. One question though. The hdr blend looks more of an exposure average than something achieved through tone mapping. I would like to see the whole sky of the dark exposure in the end result. Because it has a nice atmosphere. I wonder if you just shoot non HDR at the exposure in between both hdr exposures and use something like flaat10 you might achieve the same result with 25 real frames. But I like to try the different settings to see what it's capable of.

That is my question as well.  This seems similar to the blender VSE workflow, but with a faster render time.  Lightworks has a pretty flexible node based effects chain, so it might be easy to add in a tone mapping plugin or something, not sure that it would be very easy to create a tone mapping algorithm fast enough to keep up with real time video though.
#75
Gotcha, I'll check out your tutorial on youtube too.