Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - SpcCb

#51
General Chat / Re: Lens effects on photons
October 16, 2014, 04:34:10 AM
Maybe you should look at this problem from a different point of view, because most telescopes (especially big ones) have a secondary mirror, hence (light) obstruction.
If you tested two telescopes at the same f/d, same focal length and same aperture but with a different obstruction, you would see a major variation in light energy at the focal plane.
That is why refractors are more luminous than reflector telescopes.

So maybe you should use 'collecting surface' area instead of f/d or aperture (?).
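The 'collecting surface' idea in numbers (a sketch; the 200 mm aperture and 70 mm secondary are example values, not figures from the thread):

```python
import math

def collecting_area(aperture_mm, obstruction_mm=0.0):
    """Effective light-collecting area in mm^2:
    primary disc minus the central obstruction disc."""
    return math.pi * ((aperture_mm / 2) ** 2 - (obstruction_mm / 2) ** 2)

# Two 200 mm telescopes at the same f/d: an unobstructed refractor
# vs a reflector with a 70 mm secondary (35% obstruction by diameter).
refractor = collecting_area(200)
reflector = collecting_area(200, 70)
print(f"refractor: {refractor:.0f} mm^2")
print(f"reflector: {reflector:.0f} mm^2 ({reflector / refractor:.1%} of the refractor)")
```

Same aperture, same f/d, yet the obstructed telescope collects ~12% less light, which is exactly why aperture alone is misleading.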
#52
General Development / Re: Dynamic range and Equivalence
October 15, 2014, 10:57:51 PM
Ah, the FWC x number of pixels. I understand now.

But what's the goal? Do you plan to make a bin(full sensor size) image of 1x1 px? :)
#53
General Development / Re: Dynamic range and Equivalence
October 15, 2014, 10:48:08 PM
Quote from: a1ex
After watching this video, I was curious to see how the graphs would look like if I would use the full-well capacity of the entire sensor instead of ISO on the DR graphs.

With same-size sensors, there should be no difference. The difference appears when comparing sensors of different sizes: at the same ISO, the smaller sensor would get less light (fewer photons).
(...)
I'm not sure I understand: are you saying FWC and QE are functions of the sensor size? I mean the entire sensor die, because of the area differences between them?

(I watched the first minutes of the video and stopped when I saw tons of b.l.ch.t errors)
#54
Quote from: g3gg0
(...)
but the CFA filters are not that narrow! they have a very wide range of sensitivity and are - unlike tristimulus values - correlated.
check the spectral response of a CFA pattern, like here for the D90: http://vitabin.blogspot.it/2013/04/spectral-response-of-nikon-dslrs-d90.html
so every RAW-RGGB pixel's component is correlated to the value of the other components of that pixel.
the conclusion is, there is no 1:1 mapping between the real spectral color that occurred and the RGGB representation.
Indeed.
Even if we have a very narrow band filter or source, each pixel is not illuminated only by its corresponding wavelength; the red channel receives a part of the light in the green and blue wavelengths, etc.
Plus, regular demosaicing takes information from contiguous pixels to determine the RGB values of each pixel.
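A toy numeric illustration of that crosstalk (the wide Gaussian channel curves below are made-up stand-ins, not real measured CFA responses):

```python
import numpy as np

wavelengths = np.arange(400, 701)  # nm

def channel(center, width):
    """Made-up wide Gaussian spectral response for one CFA channel."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Broad, overlapping R/G/B curves (assumption, for illustration only).
R, G, B = channel(600, 60), channel(530, 60), channel(460, 60)

# A perfectly narrow-band source at 530 nm (pure "green" light)...
source = (wavelengths == 530).astype(float)

# ...still produces non-zero R and B responses, because the curves overlap:
r, g, b = (R * source).sum(), (G * source).sum(), (B * source).sum()
print(f"R={r:.3f} G={g:.3f} B={b:.3f}")
```

With responses this wide, the three raw values stay correlated, which is the reason there is no 1:1 map back from RGGB to the original spectral color.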

Quote from: g3gg0
if we get an accurate spectral response curve, like the one on the webpage above could be, cant we recover the probability map of every single pixel which (spectral) color
would have led to the RGGB values we find in the raw pixel values?
using this statistical data, recovering unusual color phenomens and a more perfect CA should be possible, aint it?
(...)
Theoretically yes, it is possible to reduce color artefacts (I mean generated false colors), but you would have to introduce a diffraction model into the calculation (that's why AMaZE demosaicing is smartly done), and some other things, because of the nature of light.

Quote from: chmee
(...)
idea seems logical. the "green" CFA-Pattern (laying between R and B) is able to optimize R- and B-values (simply the accuracy of colors), because of its crossover-range.
Just a note about these QE graphs; be careful, they do not represent the intrinsic QE of the sensors: the analyses are made with filters (IR-cut, VAF, ...) in front of the sensors (we can see the difference with the 40D mod on the same page).
#55
Fault on my side; a CRC error during download... (I use a crappy VPN)
Re-downloaded and it's OK. ;)

BTW I'm surprised by the size of the 6D file; with the 5D2, 'Moon' Dual ISO files are very heavy, something like 30~40 MB, more than a regular CR2.
Maybe a question of noise... Or optical sampling/focus.
#56
Audionut > Nice! ;)

Edit: the CR2 looks... strange (?); 19.1 MB, with no header information.
#57
Quote from: Levas
Thanks for the detailed explanation, lot's of post processing going on there  ;)
And now I noticed the wavelength filters: Hα - OIII - Hβ (must read better next time  :P )
So 3 wavelengths and the rest of the light is blocked out.
Indeed, an interferential narrow band filter was used; light passes in a 7~8 nm band around the target wavelength, everything else is blocked.
Plus, the 5D2 used is also modified: the internal IR-cut filter is not the original one; it was replaced by a full spectrum Astrodon filter (400~700 nm):

This is better if you want to use special filters, because the original Canon IR-cut filter blocks some wavelengths more or less, so you lose a lot of _interesting_ information.

Quote from: Levas
Sometimes I make time lapses of the night sky, with maximum exposure times of 15 seconds (I don't have a tracking mount...)
I live nearby a big city, so lot's of sodium street lights over here.
Do you think I can benefit from a clip in (clicks in DSLR in front of the mirror) type of this filter ?
http://www.astronomik.com/en/visual-filters/uhc-e-filter.html
Of course I don't expect a filter like that too give results like the photo you posted  :P (For that I need a tracking mount and lot's of knowledge in post processing astro pictures)
EOS clip filters are very interesting; you can use them with a lot of optics/lenses (specific astronomical filters are designed to be used only with astronomical optics because of the angular light flux issue). But I have never used the EOS XL clip filter on a full-frame camera yet (I see you use a 6D). It should happen in the next few months. It looks tricky, because of the mirror (locked in the up position?).

The UHC-E filter is well suited if you have a very polluted sky; it gives very good results. All sodium (high and low pressure lamp) emissions are blocked.
But if the sky is moderately polluted, the IDAS LPS gives better results because that filter preserves a kind of spectrum continuum (for RGB imaging in one shot), although some sodium emissions are not blocked.
In images:

Left to right: no filter, LPS, UHC

Quote from: Levas
Did try to capture the summer milky way a few months ago:
https://drive.google.com/folderview?id=0B1BxGc3dfMDaSkNlZ0ZtaHN4Ym8&usp=sharing

Although not bad, it could use a lot more contrast(and I already added a lot contrast in Lightroom in this example).
Not bad! We can see the NGC7000 nebula, like in the image at the top of this thread ;)
Your sky doesn't look so polluted in fact. If you are in Europe I know good light pollution maps, if you want to get an idea.

Quote from: a1ex
You have used Dual ISO on a such a long exposure? It worked?! Can you share a CR2?
Yes, I did. :D
There are a lot of hot pixels without cooling, and we can't use the cr2hdr hot pixel removal in this case (because of the stars), so it's a bit tricky, but it works. With good cooling it works better, of course (it actually works better than in regular photography).
But the major issue is the loss of spatial resolution, which matters in astrophotography.
Here it was not a problem because it was not for high-end imaging, just for filter tests (we can see horrible reflections in the OIII band), and I planned to bin + resample from the beginning, so..
I'll PM you a CR2 ASAP (I'm not in front of my home computer for a couple of days).

Besides, the most important thing in my eyes is the ADTG/DFE gain + black offset + white clipping definition; driving the Canon sensor like a specialized/scientific/astronomical camera is enormous. I'm thinking of making an ML module with special tweaks for astronomy (and some specialized functions like an FWHM map assist in LiveView); I just need to find time to make a public-friendly version.
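The FWHM map assist could start from something as simple as this (a sketch, assuming a roughly Gaussian star profile on a background-subtracted cutout; not ML code):

```python
import numpy as np

def fwhm(star):
    """Estimate FWHM (in pixels) of a background-subtracted star image
    from its intensity-weighted second moment, assuming a Gaussian PSF."""
    y, x = np.indices(star.shape)
    total = star.sum()
    cx, cy = (x * star).sum() / total, (y * star).sum() / total
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) * star
    sigma = np.sqrt(r2.sum() / total / 2)  # per-axis sigma
    return 2.3548 * sigma                  # FWHM = 2*sqrt(2*ln 2)*sigma

# Synthetic star with sigma = 2 px -> expected FWHM ~ 4.71 px
y, x = np.indices((41, 41))
star = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 2.0 ** 2))
print(f"{fwhm(star):.2f} px")
```

Computing this on a grid of star cutouts across the frame gives the kind of FWHM map a focus assist would display.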
#58
Indeed Levas, only one frame of 600 s per wavelength.

Multi-spectral AMaZE demosaicing is a variant of AMaZE demosaicing _as found in recent dcraw versions or RawTherapee_ with multiple levels of analysis computed from the energy distribution across different spectral bands in a discrete cosine transform (DCT) block, to get better results. Especially with B&W frames, which is what we get in this case with narrow band filters.
Bin2 is regular pixel binning, by 2x2. It turns a 2x2 pixel matrix into 1 pixel by summing the signal levels in the matrix. It means we get SNR ~4x.
Dynamic compression is also a regular operation to visually get more details in discrete areas. I used an HDR Multiscale Transformation (HDRMT), made by Vicent Peris.
Richardson-Lucy deconvolution is a special operation to enhance spatial information.

Pixel binning, dynamic compression and Richardson-Lucy deconvolution can be done with common astronomical software like PixInsight or Iris.
Multi-spectral AMaZE demosaicing is a home-made tool, built with MIDAS.
#59
Modules & functions w/ ML:
• Long exposure (Bulb) with mirror pre lock-up @600s
• Dual ISO @1600-100
• ADTG/DFE Gain: -23 w/ e-gain @0
• Black offset level: 256 ADU
• White clipping level: 16383 ADU

Setup:
• 5D2 w/ cooling capability, device off (passive mode), sensor temperature @12°C (end of exposure)
• Optics: 400mm f/2.8L mod., working @f/2.6
• Filters: RGB composite with Hα - OIII - Hβ, 1x600s per wavelength
• Astronomical mount for sidereal tracking w/ pseudo-random dithering between exposures


North America Nebula (NGC7000)

Post:
• CR2->DNG w/ cr2hdr-20bit v.0eabcb0 - 2014-08-29
• FPN reduction w/ full resolution master made with 301 samples
• Multi-spectral AMaZE demosaicing w/ 100 pass per wavelength
• Bin2 on each wavelength
• Dynamic compression, same on each wavelength
• Richardson-Lucy deconvolution w/ 49 pass per wavelength
• 1000px resampling in PS

Misc.:
• No dark frames or noise reduction
• Made in Vexin Regional Natural Park (France), September 2014.
#60
Raw Video / Re: Raw video on 5DMK2
October 02, 2014, 01:30:06 PM
QuoteI keep reading that Sandisk cards are cheaper.. But in my experience Sandisk cards are always more expensive...
What I find:
Sandisk Extreme Pro 128GB 160mb/s; $350
Lexar professional 128GB 1066x; $250

Am I missing something..?
Maybe it depends on the country (marketplace)?
In Europe Sandisk is more expensive than Lexar, and a quick look at Asia shows it seems to be the same.

However Sandisk cards _especially the latest products_ are more reliable in extreme conditions; we use them because all other brands (tested) cannot work fine at -30°C, +60°C, under HF vibrations, condensed humidity, etc.
Of course, for regular use there is no need to pay extra for this.
#61
Modules Development / Re: Full-resolution silent pictures
September 25, 2014, 02:57:11 PM
Quote from: a1ex
@SpcCb - you mean the sensor heating done before taking the picture, while the camera was in LiveView, right?
Absolutely; activating LiveView for a couple of minutes makes the sensor reach Δt~30°C[1], and Δt~40°C[1] for the mainboard (CPU + FPGA + ASIC + LCD heating), which contributes to the heating inside the body (and of the sensor, by the way).

You can easily see it by monitoring the temperature reported by the camera, which is the temperature of the probe on the motherboard (slightly different from the real sensor temperature).
Starting from ambient temperature, it rises quickly during the next minutes after you activate LiveView.

It is not very noticeable _in the thermal signal_ with 'not so long exposures', but IMHO silent picture should not be used for 'very long exposure' needs in astrophotography or low-flux imaging.
However, silent picture is not very interesting with long exposures anyway, because it means we don't take a lot of pictures, so there is no shutter mechanism stress. So...

[1] => Δt = tsensor - tambient
#62
Modules Development / Re: Full-resolution silent pictures
September 24, 2014, 01:35:16 PM
bhursey > The noise difference is maybe because you use LiveView for frame recording; I made some series in deep-sky imaging with silent picture and noted that heating of the sensor with LiveView activated for a long time just before shooting (and obviously heating of the motherboard too) generates more thermal signal on the frames.
This issue is also well known without using silent picture.
#63
Modules Development / Re: Full-resolution silent pictures
September 09, 2014, 11:22:32 PM
With an Alt-Az mount it is useless to try to guide, because of the field rotation.
And even with a very high S/N (hundreds of silent pictures registered, for example), you will be limited by the short exposure time before the rotation becomes visible: you could get a very high S/N but at a low magnitude. (Plus I'm not even speaking about FPN inter-correlation if you don't dither between exposures.)

The first thing to do would be to DIY a latitude wedge to turn the mount into an equatorial one.
(Or to buy one, but well.. I don't want to promote anything.)

Besides, Full Resolution Silent Picture is very interesting in planetary imaging, especially with some ADTG optimizations to get the best dynamic range and low [or no] electronic amplification.
#64
Modules Development / Re: Full-resolution silent pictures
September 09, 2014, 05:14:02 PM
Ah yes, through the PTP protocol it should be possible. An Arduino should also be enough to do the protocol conversion (PTP -> ST4).

aleks > What is the ratio between 'how well the mount can track' and 'how long the exposures are expected to be'? (It would be interesting to know the sampling of the imaging camera and a tracking error log from the mount.) Because if it is only a question of polar alignment drift compensation _every nn seconds_, there are other solutions (if you see what I'm pointing at :) ).
#65
Modules Development / Re: Full-resolution silent pictures
September 09, 2014, 11:33:07 AM
QuoteI did not use FPS override because I was hoping that the readout of the sensor was done in such a way that a long exposure image could be captured while in uninterrupted LV (non-destructive read), but I guess I was hoping for too much
Even if it were possible, and using zoom mode to get an unscaled image of the guide star in the field, it looks like it would be hard to compute a precise drift analysis on the fly while taking silent pictures in parallel. Maybe we would get blackouts of several seconds because of the silent picture computation.

By the way, using ML as an autoguiding system would be very nice. Without using the camera for imaging, just as a guiding camera.
But is there a way to send ST4 commands (on/off orders on 4 directions) directly from the body through USB?
#66
Nothing to say about the video, but I just noticed a girl had a wristband with: 'I ♥ ML'. :D
#67
Just a note for new users who want to use this VM solution: since Ubuntu 12.10 (Quantal Quetzal), the release in this VM, reached End of Life on May 16, 2014, you have to change some things in the repository configuration; the default setup is out of date, so installing and/or upgrading software will not work as-is.

-> go to /etc/apt and open sources.list with root privileges, e.g. sudo vi /etc/apt/sources.list
-> replace all *archive.ubuntu.com* entries with *old-releases.ubuntu.com* and save the file
-> update everything: sudo apt-get update && sudo apt-get dist-upgrade

Now you can use apt-get or the software manager as needed, e.g. to install the docutils stuff required to compile the latest releases of ML.

Note: Maybe it would be interesting to upgrade the VM to a modern distro for new users. I'm currently using LMDE for dev work, mainly because it is easier to maintain over time (a semi-rolling distribution based on Debian Testing); if it is relevant I could generate a basic VM file.

Information:
-> https://wiki.ubuntu.com/Releases
-> http://fridge.ubuntu.com/2014/05/01/ubuntu-12-10-quantal-quetzal-reaches-end-of-life-on-may-16-2014/
#68
Quote from: a1ex on July 01, 2014, 05:11:15 PM
(...)
If you call these routines without going to LiveView first, the result will be a dark frame (interesting for astro - you can take 300 dark frames without wearing the shutter mechanism).
Very nice. ;D

Quote from: a1ex on July 01, 2014, 05:11:15 PM
(...)
Limitations

- The fastest shutter speed I've got is around 1/10 seconds (very rough estimation by comparing brightness from a regular picture). With regular pictures, faster speeds are done via mechanical shutter actuation.
- Long exposures are fine up to 15 seconds (longer exposures will crash the camera).
(...)
So it is a solution for time-lapses, not really for astronomy (15 s is a bit short), for the moment.

BTW I'm pretty sure it is possible to get the same parameters as a regular picture (exposure duration etc.), because the Canon LX NR (long-exposure noise reduction) does it (we can do a 15 min dark frame for example). The question is 'will it be possible in LV'..
#69
It seems to be an old issue; I reported it last year: -> http://www.magiclantern.fm/forum/index.php?topic=4791.msg28383#msg28383

In fact, even when the camera is 'off' some electronics are still active; you can verify this by removing/attaching an EF lens -> the red LED will blink.
So maybe _in certain conditions, even after waiting for the 'shutdown' red LED blink_ ML drains the battery in the 'off' position.
Note: I even had this issue without removing the CF card, just by switching the camera off after using it.

For people who have this issue: do you use the 'standard' ML v2.3 or a nightly build?
#70
Quote from: a1ex on June 11, 2014, 07:47:32 PM
(...)
- the readout speeds may be different (could this cause noise differences?)
(...)
Yes, RON is a function of the read speed.
It could be an explanation _or part of one_ for the RON value differences.

For example, 2~5 e- RMS corresponds to a read speed on the order of 10E+2 MHz and 6~9 e- RMS corresponds to a read speed of a couple of 10E+1 MHz.
Note: there are other hardware characteristics that define RON, not only the read speed (photodiode current, etc.)

Besides, we must not forget that Canon CMOS sensors use CDS (Correlated Double Sampling), so the extra-low RON we get is 'hardware optimized', and it could be adjusted to fit demands (specific video operations etc.).
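The CDS point can be illustrated with a small simulation (toy numbers, not measured values): the reset (kTC) level is common to both samples of a pixel, so subtracting them cancels it, leaving only the per-sample amplifier noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = 500.0      # electrons
ktc_noise = 30.0    # reset (kTC) noise, e- RMS -- identical in both samples
amp_noise = 5.0     # amplifier noise per sample, e- RMS

reset_level = rng.normal(0.0, ktc_noise, n)
sample_reset = reset_level + rng.normal(0.0, amp_noise, n)
sample_signal = reset_level + signal + rng.normal(0.0, amp_noise, n)

single = sample_signal               # single sampling keeps the kTC noise
cds = sample_signal - sample_reset   # CDS cancels the shared reset level

print(f"single-sample noise: {single.std():.1f} e-")
print(f"CDS noise:           {cds.std():.1f} e-")
```

With these numbers, single sampling is dominated by the ~30 e- reset noise, while CDS brings it down to sqrt(2) times the amplifier noise, ~7 e-.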
#71
Quote from: a1ex on June 07, 2014, 06:53:42 PM
I expect temperature to cause differences in the read noise (in this model), but I'm interested mostly in the shot noise. So it should be fine.
Indeed, the analyses I made show that temperature does not significantly affect RON. The variation is slightly under 1% per 50°C, in the range -50°C -> +60°C, with the temperature recorded directly on the back side of the CMOS with a special device (the internal thermometer is not relevant because it is placed on the mainboard).

Besides, the thermal signal follows the CMOS temperature closely, causing major noise issues when we use our cameras with short exposure times while the temperature rises (the thermal signal is _mainly_ a function of temperature and exposure duration).

So when we make a noise analysis it is important to use a process where the CMOS temperature is taken into account.

Quote from: a1ex on June 07, 2014, 06:53:42 PM
(...)
From the above data, I'll try to guess the binning factors from LiveView (and I'll ask SpcCb to double-check what follows):
(...)
Very interesting data.
But I don't yet understand why RON blows up in 5x mode at high ISO compared to photo mode. Maybe a level intercorrelation inside the logical pixel intrication area because of the .8n reduction factor.

BTW I did not know there was a reduction factor in 5x video mode; I can't make tests with a 5D3 here (mine doesn't have a Bayer matrix anymore), but I'm interested in testing whether it's the same thing on a 5D2. What kind of resolution chart should we use?
#72
Let's take SNR as a reference, because overall values can be hard to compare, in a simple and not-so-realistic example:
if in crop mode (5x) you get 3 and in 1080p you get 12, it means a bin2 was done: a group of 4 pixels, hence 4x3, and so 12.

SNR increases with the square of the pixel matrix size: bin2 -> 2x2 pixels -> 4 pixels -> 4x SNR; bin4 -> 4x4 pixels -> 16x SNR...

In the real world there may be some variance, but it should be close to that.

Note: could you estimate the binning value by comparing the full resolution to 1080p? I was thinking about 5760 px -> 1920 px, so a bin3 BTW.
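The 4x figure can be checked numerically under one assumption: the 2x2 charges are summed on the sensor and read once, so the read noise is paid a single time per superpixel (hardware binning). A quick sketch, read-noise-limited regime, toy numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0      # electrons per native pixel (faint, read-noise-limited)
read_noise = 50.0   # electrons RMS per readout
n_samples = 200_000

# Unbinned: each pixel is read individually.
unbinned = signal + rng.normal(0.0, read_noise, n_samples)

# bin2 "hardware" model: 2x2 charges are summed on-chip, then read ONCE,
# so read noise is added a single time per superpixel.
binned = 4 * signal + rng.normal(0.0, read_noise, n_samples)

snr_unbinned = unbinned.mean() / unbinned.std()
snr_binned = binned.mean() / binned.std()
print(f"SNR gain from bin2: {snr_binned / snr_unbinned:.2f}x")
```

If instead the four pixels were read individually and summed in software, the four read noises would add in quadrature and the gain would only be 2x, which is one way to tell where in the chain Canon does the binning.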
#73
Quote from: Audionut on June 06, 2014, 07:44:16 PM
(...)
Thoughts on the data:
At first it appeared like movie mode was entirely Read noise limited at lower ISOs.  I suspect that movie mode is somewhat rather Read noise limited (compared to photo mode), however, the curve was extending above the top of the graph, so I can't be certain.  It needs some automatic fitting of the curve, or at least, different scales on the graph for movie mode and photo mode.

The DR is greater in movie mode then photo mode.  Now, we know the ADTG registers don't change, so if we also consider that the the images are more Read noise dominated, then I believe that this shows pixel binning.
(...)
To confirm whether pixel binning is done it's quite simple: RON should be the same before/after binning (only the signal value changes), or, with a normalized signal value, the SNR should go up as a function of the number of pixels binned.

PS. I re-ran some tests with raw_diag but I can't find a way to get good samples with artificial lights. Audionut > Is your scene in daylight?
#74
Quote from: a1ex on June 06, 2014, 01:05:09 AM
I only ran a test shot to check that memory error. I'd prefer not to sit down and take a bunch test pictures that other people can do very easily.
OK -.^

Quote from: a1ex on June 06, 2014, 01:05:09 AM
(...)
What would be interesting is to run it in LiveView and figure out how many pixels are averaged when downsampling. Can you help me figure out the math for this? (that is, how the noise should behave if N sensels are averaged on the CMOS sensor, before amplification - or wherever you think Canon might do this step)
Of course, if I can help.

I thought resizing was done by pixel group skipping (?). Which causes, with the VAF (working as a median filter), aliasing/moiré problems (because of the Nyquist limit), except in 5x video mode (where the CMOS surface is recorded at full resolution). So in this case, pixel information (levels, etc.) should be the same before/after the downsampling.

Or are you speaking about the video feed directly on the camera's back screen?

Besides, I know noise, FWC, etc. well in the case of binning downsampling (hardware or not, BTW), but obviously this is not the case here (?). And I've never seen a hardware CMOS averaging capability (it looks technically improbable), even if it is _maybe_ possible.
#75
Quote from: a1ex on June 06, 2014, 12:33:22 AM
(...)
However, in your ISO 100 sample, you don't have any data points above 8 EV. This is very important for a good fit.
(...)
Ah, that's why.
Though part of the surface was overexposed (burned, in fact) and another part quite dark. As usual..
I will see later about re-running it if needed, since you seem to already have 5D2 data (?).

Besides, it would be interesting to run this kind of analysis with a special mini_iso configuration (to see the FWC gain) and with the dual_iso feature (the FWC gain should be [...!]). -.^

note: I don't know if it's a UI restriction, but the unit for FWC is e- (e is negative actually).