Full-resolution silent pictures (silent.mo)

Started by a1ex, July 01, 2014, 05:11:15 PM


aleks

I have tried the approach suggested by a1ex, built the raw video branch, and the silent functionality works (the image is distorted and has black borders, probably because the pixel offsets are wrong for the 1100D). I could not build the branch referred to by dmilligan (I probably checked out the wrong commit).

Anyway, with what I have right now I can't really do what I intended to. Namely, the exposure duration of the silent image is always the same; I can't seem to take a silent picture with a longer exposure (say 5 seconds). a1ex mentioned that longer exposures should be possible (in the first post of this thread), but that may apply only to full-res silent pics. I activated LV and changed the exposure setting back and forth, but the silent pic was always recorded immediately; there was no perceptible "accumulation" time before the save message, even at exposure settings of 5-10 seconds. I have tried both .raw and .jpg formats.

For the "autoguide mode" to be useful, I would need a silent pic with an exposure time of, say, 10 seconds (the longer the better) while the LV shows uninterrupted video on the screen. Not sure if this is possible (it was not mentioned whether the display is refreshed during a long-exposure silent image).

dmilligan

Quote from: aleks on September 09, 2014, 12:52:12 AM
Namely, the exposure duration of the silent image is always the same; I can't seem to take a silent picture with a longer exposure (say 5 seconds).
FPS override

Quote from: aleks on September 09, 2014, 12:52:12 AM
I would need a silent pic with an exposure time of, say, 10 seconds (the longer the better) while the LV shows uninterrupted video on the screen. Not sure if this is possible.
No, that is impossible. From what data would the display be updated while capturing an image? The sensor can't make two different exposures at the same time.

aleks

Quote
No, that is impossible. From what data would the display be updated while capturing an image? The sensor can't make two different exposures at the same time.

I did not use FPS override because I was hoping that the readout of the sensor was done in such a way that a long-exposure image could be captured while in uninterrupted LV (non-destructive read), but I guess I was hoping for too much :D

Maybe something like this would be possible if we could control the readout such that odd pairs of lines are read at 25 fps for the LV display, while the rest are kept on hold for a user-defined exposure time. Before coming across this thread on the forum, I was reading around to find out whether this was possible; probably not, and even if it were, it is highly unusual and would be very time-consuming to research and implement. In conclusion, it was a nice idea, but too ambitious. Thanks for the support.
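Purely to illustrate the interleaved-readout idea (no current Canon sensor exposes such a mode), here is a toy numpy simulation in which even rows feed the 25 fps preview while odd rows accumulate for the whole exposure:

```python
import numpy as np

def simulate_interleaved_readout(frames):
    """Toy model of the proposed readout: even rows are read out every
    frame to feed the live preview; odd rows are left to accumulate for
    the whole sequence, forming a half-height 'long exposure'."""
    previews = [f[0::2] for f in frames]                       # per-frame preview rows
    long_exposure = np.sum([f[1::2] for f in frames], axis=0)  # accumulated rows
    return previews, long_exposure

# 25 uniform frames at 25 fps -> one second of accumulation on the odd rows
frames = [np.ones((8, 8)) for _ in range(25)]
previews, long_exp = simulate_interleaved_readout(frames)
```

Note the obvious cost visible even in the toy model: both the preview and the long exposure come out at half vertical resolution.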

SpcCb

Quote
I did not use FPS override because I was hoping that the readout of the sensor was done in such a way that a long-exposure image could be captured while in uninterrupted LV (non-destructive read), but I guess I was hoping for too much
Even if it were possible, and even using zoom mode to get an unscaled image of the guide star in the field, it looks like it would be hard to compute a precise drift analysis on the fly while a silent picture is being taken in parallel. We might get a blackout of several seconds while the silent picture is processed.

By the way, using ML as an autoguiding system would be very nice. Without using the camera for imaging, just as a guiding camera.
But is there a way to send ST4 commands (on/off outputs on the four direction lines) directly from the body through USB?

dmilligan

Quote from: SpcCb on September 09, 2014, 11:33:07 AM
By the way, using ML as an autoguiding system would be very nice. Without using the camera for imaging, just as a guiding camera.
But is there a way to send ST4 commands (on/off outputs on the four direction lines) directly from the body through USB?
I have thought about this too. I think you'd need an Arduino or something to act as a USB PTP host and translate commands. The other option would be a photo-transistor taped to the LED, and some kind of decoding circuitry; then you could send commands via LED blinks.
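To make the LED-blink idea concrete: the ST4 interface is just four on/off direction lines (RA+/RA-/DEC+/DEC-), so one conceivable scheme is to encode each direction as a distinct number of blinks and let the decoding circuitry count pulses. The encoding below is entirely made up for illustration; it is not an existing ML or guiding protocol:

```python
# Hypothetical pulse-count encoding for ST4-style guide commands via LED blinks:
# N short pulses identify the direction, followed by a long end-of-command gap.
BLINKS = {"RA+": 1, "RA-": 2, "DEC+": 3, "DEC-": 4}

def encode(direction, pulse_ms=20, gap_ms=20, end_gap_ms=200):
    """Return a list of (led_on, duration_ms) pairs for one command."""
    n = BLINKS[direction]
    seq = []
    for i in range(n):
        seq.append((True, pulse_ms))                          # LED on: one pulse
        seq.append((False, gap_ms if i < n - 1 else end_gap_ms))
    return seq

def decode(seq):
    """What the decoding circuitry would do: count the ON pulses."""
    pulses = sum(1 for led_on, _ in seq if led_on)
    return {v: k for k, v in BLINKS.items()}[pulses]
```

A real implementation would also need pulse-duration information (how long to energize the direction line), which could ride on the length of the final gap or a second pulse group.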

aleks

SpcCb: I have a specific astro-photo setup, and was not planning very long exposures. I needed the LV video as a reference (with a cross-hair overlay, perhaps) to make periodic corrections manually (my mount is cheap and tends to lag after a few seconds at the focal length I use), so not a proper autoguiding setup, but an exploration of the possibilities using what I have. The silent image accumulating during that time would have been the exposure I need. At full frame there would probably have been an issue with detecting movement of the guide star, but ML can show the 5x zoom in a separate window during LV, so maybe that would have been enough.

dmilligan: Indeed, there is a Raspberry Pi project out there which takes an image input and does the rest (calculations, and sending control signals to the mount). Doing all that in ML may turn out to be too much for the camera.

To come back from OT: since I don't have a guidescope, my plan was as stated above. If per-line readout of the CMOS could be achieved, some possibilities would exist, but I understand that it would take too much effort. ML is already impressive enough.
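For reference, the core of such a guiding computation can be sketched in a few lines of numpy: an intensity-weighted centroid of the guide-star patch, compared against a reference position. This is generic illustration code, not taken from the Raspberry Pi project or from ML:

```python
import numpy as np

def centroid(patch):
    """Intensity-weighted centroid (y, x) of a 2D star patch."""
    patch = np.asarray(patch, dtype=np.float64)
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total

def drift(reference_patch, current_patch):
    """Pixel offset of the guide star relative to the reference frame."""
    ry, rx = centroid(reference_patch)
    cy, cx = centroid(current_patch)
    return cy - ry, cx - rx

# toy example: a single-pixel 'star' that moved one pixel in x
ref = np.zeros((5, 5)); ref[2, 2] = 1.0
cur = np.zeros((5, 5)); cur[2, 3] = 1.0
```

A real guider would subtract the background, select the patch around the brightest star, and convert the pixel offset into mount correction pulses; the centroid step above is the part that needs the live image.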

SpcCb

Ah yes, through the PTP protocol it should be possible. An Arduino should also be enough to do the protocol conversion (PTP -> ST4).

aleks > What is the ratio between 'how finely the mount can track' and 'how long the expected exposures are'? (It would be interesting to know the sampling of the imaging camera and a tracking-error log from the mount.) Because if it is only a question of compensating polar alignment drift _every nn seconds_, there are other solutions (if you see what I'm pointing at :) ).

aleks

Quote
What is the ratio between 'how finely the mount can track' and 'how long the expected exposures are'? (It would be interesting to know the sampling of the imaging camera and a tracking-error log from the mount.) Because if it is only a question of compensating polar alignment drift _every nn seconds_, there are other solutions (if you see what I'm pointing at :) ).

The setup is very basic, it is a Meade ETX Alt-Az mount which has poor tracking when the camera is attached. It can manage only 3 seconds of good tracking before it needs a correction. My goal is to increase the S/N of the images, hence I wanted longer exposures on the cheap, and wanted to try guiding while exposing, hopefully using silent pics with ML.
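To put rough numbers on that tracking budget, the standard plate-scale formula is scale = 206.265 * pixel_pitch(um) / focal_length(mm) arcsec/pixel; combined with a residual drift rate, it gives the time before stars trail by more than a pixel. The pitch, focal length and drift rate below are made-up illustration values, not measurements from this setup:

```python
def image_scale_arcsec_per_px(pixel_pitch_um, focal_length_mm):
    """Plate scale: 206.265 * pitch[um] / focal[mm], in arcsec/pixel."""
    return 206.265 * pixel_pitch_um / focal_length_mm

def max_unguided_exposure_s(pixel_pitch_um, focal_length_mm,
                            drift_arcsec_per_s, max_trail_px=1.0):
    """Seconds until a star trails by more than max_trail_px pixels,
    given the mount's residual drift rate."""
    scale = image_scale_arcsec_per_px(pixel_pitch_um, focal_length_mm)
    return max_trail_px * scale / drift_arcsec_per_s

# illustration values (NOT measured): ~5.2 um pitch, 300 mm focal length,
# 1 arcsec/s residual drift -> a few seconds before one pixel of trailing
limit = max_unguided_exposure_s(5.2, 300, 1.0)
```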

SpcCb

With an Alt-Az mount it will be useless to try to guide, because of field rotation.
And even with a very high S/N (hundreds of silent pictures registered, for example), you will be limited by the short exposure time before the rotation becomes visible: you could get a very high S/N but a low limiting magnitude. (And that's not even considering FPN inter-correlation if you don't dither between exposures.)

The first thing to do would be to DIY a latitude wedge to put the mount in equatorial mode.
(Or buy one, but well... I don't want to promote anything.)

Besides, full-resolution silent picture is very interesting for planetary imaging, especially with some ADTG optimizations to get the best dynamic range and low [or no] electronic amplification.

Levas

I can't manage to average the multiple DNGs in a full-res silent picture MLV file.

I did some full-res silent shooting with MLV output (using a build from August 8 for the Canon 6D).
Now I have multiple files (.MLV, .M00, .M01, etc.).
Trying to process them with mlv_dump on Mac didn't work (I got the newest mlv_dump build, from August 31).

This is what I get:

MLV Dumper v1.0
-----------------

Mode of operation:
   - Input MLV file: '00000000.MLV'
   - Rewrite MLV
   - Output only one frame with averaged pixel values
   - Output into '00000000_frame_'
File 00000000.MLV opened
File 00000000.M00 opened
File 00000000.M01 opened
File 00000000.M02 opened
File 00000000.M03 not existing.
Processing...
/Users/wij/Documents/Foto's en video's/Video/2014/2014-09-16 - Darkframes/10seconde/Average-mlv.command: line 7:   619 Segmentation fault: 11  /usr/bin/mlv_dump -o ${BASE}_frame_ $FILE -a
logout

[Proces voltooid]

So I thought, let's do it again, but with a maximum of 90 pictures, so I end up with one single MLV file.
That also didn't work:

MLV Dumper v1.0
-----------------

Mode of operation:
   - Input MLV file: '00000001.MLV'
   - Rewrite MLV
   - Output only one frame with averaged pixel values
   - Output into '00000001_frame_'
File 00000001.MLV opened
File 00000001.M00 not existing.
Processing...
/Users/wij/Documents/Foto's en video's/Video/2014/2014-09-16 - Darkframes/Average-mlv.command: line 7:   704 Segmentation fault: 11  /usr/bin/mlv_dump -o ${BASE}_frame_ $FILE -a
logout

[Proces voltooid]

Attero

@Levas: Why would you average, let's say, 400 frames or even 90? If you want to extract single DNGs you have to use the "--dng" option, but I guess you know that. If it's only for experimental purposes, extract the DNGs, develop them with, say, LR, and average them in Photoshop. Maybe you could upload the 90-frame .MLV so others can try to average the file?

PS: I put together some sequences I've shot with the new module. I hope you enjoy this short clip.
A high-five to a1ex, dmilligan and the whole Magic Lantern team for this module!


Levas

I want to average a whole lot of dark frames for dark-frame subtraction.
When I shoot a time-lapse at night (stars in the sky), I shoot at ISO 6400 with about 10 seconds of exposure time.
These pictures of course contain noise  8)

Now I have a bunch of photos taken with the same ISO and exposure time, with the lens cap on: so-called dark frames. These frames contain nothing but noise.
Now I want a single DNG file which represents the averaged dark frames.
This single dark frame I can subtract from the normal time-lapse photos to reduce noise.

The single MLV with the 90 pictures is 2.5 GB... so I'd better make a smaller one with fewer photos to share  :P
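While mlv_dump's averaging segfaults, the same master-dark workflow can be sketched with numpy on frames obtained some other way (e.g. DNGs extracted with "--dng" and decoded by any raw reader; that decoding step is outside this sketch). The functions below operate on plain arrays and are an illustration, not ML or mlv_dump code:

```python
import numpy as np

def master_dark(dark_frames):
    """Average a stack of dark frames into one master dark; averaging
    N darks reduces their random noise by roughly sqrt(N)."""
    return np.mean(np.stack(dark_frames), axis=0)

def subtract_dark(light, master, black_level=0):
    """Subtract the master dark from a light frame, clipping so the
    result never goes below the black level (avoids wrap-around in
    unsigned raw data)."""
    return np.clip(light.astype(np.float64) - master, black_level, None)
```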

Levas

"PS: I put together some sequences I've shot with the new module. I hope you enjoy this short clip."

I like the clips at the end with the low sun above the meadows  :D
I see on YouTube that you worked in a 4K timeline.
But the video on YouTube in 1080p looks very compressed; I expected far more detail from your workflow.
What were your export settings in Premiere, i.e. which codec/format and what bitrate?

Attero

Ah, OK, now I get the point.

Export settings were 1080p | 25fps | h.264 | High Profile | Level 5.0 | VBR1 | 15 Mbit/s avg, 35 Mbit/s max.
I uploaded the clip at 1080p because my internet connection is very bad: 2 Mbit, 45 kB/s upload...
And yeah, the YouTube compression sucks... I don't know what to do to make the video look more crisp and clean. Any tips or hints?
I'm going to upload the clip to Vimeo very soon. Let's see what they do with the file.

Levas

Nothing wrong with the export settings, so it's probably the YouTube compression.
And yeah, 2 Mbit sucks; same here, 2.5 Mbit max is all I can get  :'(


ansius

Go for 24 fps; that leaves more bitrate for each actual frame. Make it as clean as you can (YouTube hates grain). Otherwise, use Vimeo :) and preferably get better internet if you can.
Canon EOS 7D & 40D, EF-S 17-85mm IS USM, EF 28-300mm IS USM, Mir-20, Mir-1, Helios 44-5, Zenitar ME1, Industar 50-2, Industar 61L/Z-MC, Jupiter 37A, TAIR-3
http://www.ansius.lv http://ansius.500px.com

mathi

Could someone please compile this for the 600D/T3i and post the link here? I would really appreciate that.

itsskin

Hi guys! My 5D3 crashes quite often with this module, like every 100 pics. Here is the log:

ASSERT: GetMemoryAddressOfMemoryChunk( GetFirstMemChunk( pMem1AllocateListItem->hMemSuite ) ) == pMessage->pAddress
at SrmActionMemory.c:1505, task RscMgr
lv:0 mode:3


Magic Lantern version : Nightly.2014Sep21.5D3113
Mercurial changeset   : 1a0167779348 (fullres-silent-pics) tip
Built on 2014-09-21 07:51:37 UTC by magiclantern@magiclantern-VirtualBox.
Free Memory  : 139K + 3738K

Shooting DNG every 10 sec to Lexar x1000 CF card

a1ex

So far I've got these crashes only when pressing the shutter halfway *while* a silent picture was being taken. There is a race condition when switching the GUI mode to QuickReview: Canon code unlocks the GUI, and the half-shutter press returns to LiveView, where it can't continue because the resources are already allocated for picture taking.

I've never got the error during a timelapse (~300-500 frames).

itsskin

It crashes consistently here with the built-in intervalometer. I do not touch the camera at all.

a1ex

Can you post your SETTINGS directory?


bhursey

I have been out of coding for about 8 years, LOL, but I am going to see if I can get this built for my 60D. I will let you all know what works and what does not. My build environment is being finicky. If I can get it working: on Saturday I am going to be in the mountains shooting star time-lapses, so this might be handy. :) With a 50mm f/1.8 at ISO 6400, to expose the Milky Way I need a 14-second exposure, so considering the limit is 15 seconds, this may work.

jtvision

Hi All,
I've searched the forum (and also found a 46-minute YouTube video) on how to compile ML features... but it seems very complicated to me. Can anybody please help compile this for the 600D or 5D Mark III?

bhursey

Yeah, it's not fun; I have been banging my head against the wall for 2 days. I would love a build for my 60D, but I want to learn. Still, if anyone feels like building one for the 60D while I bash my head against the wall, I am all for it.

I am getting hung up on the following. I have searched high and low and tried a ton of different things. This is actually quite a bit of progress from what I had. Note that I do a make clean first. Also, this is on a Mac.

brians-mac-mini:fullres-silent-pics brianhursey$ make 60D
make -C  /Users/brianhursey/Code/fullres-silent-pics/platform/60D.111
[ VERSION  ]   ../../platform/60D.111/version.bin
[ CPP      ]   magiclantern.lds
/bin/sh: -v: command not found
make[1]: *** [magiclantern.lds] Error 127
make: *** [60D] Error 2

brians-mac-mini:fullres-silent-pics brianhursey$ cat Makefile.user
ARM_PATH=~/Code/gcc-arm-none-eabi-4_7-2013q2
ARM_BINPATH=$(ARM_PATH)/bin
GCC_VERSION=4.7.4
CC=$(ARM_BINPATH)/arm-none-eabi-gcc-4.7.4
OBJCOPY=$(ARM_BINPATH)/arm-none-eabi-objcopy
AR=$(ARM_BINPATH)/arm-none-eabi-ar
RANLIB=$(ARM_BINPATH)/arm-none-eabi-ranlib
LD=$(CC)
HOST_CC=gcc
HOST_CFLAGS=-O3 -W -Wall
UMOUNT=echo
CONFIG_TCC          = y
CONFIG_MODULES      = y
CONFIG_CONSOLE      = y
PYTHON=python

brians-mac-mini:~ brianhursey$ cat .bash_profile
# Set architecture flags
export ARCHFLAGS="-arch x86_64"
# Ensure user-installed binaries take precedence
export PATH=/usr/local/bin:$PATH
# Load .bashrc if it exists
test -f ~/.bashrc && source ~/.bashrc

if [ -f $(brew --prefix)/etc/bash_completion ]; then
    . $(brew --prefix)/etc/bash_completion
fi
export PATH="$PATH:$(brew --prefix coreutils)/libexec/gnubin"


Also, /bin/sh does exist:

brians-mac-mini:fullres-silent-pics brianhursey$  /bin/sh -v
sh-3.2$