Messages - gotar

#1
Feature Requests / Fetch settings from a photo
April 04, 2014, 09:44:19 AM
I wonder whether it would be possible to use the EXIF data of any previously taken picture as a template for the camera settings - I mean the ISO value, WB settings (temperature, offsets), exposure etc. (maybe the image format/size too).

I see two use cases:

1. easily coming back from experiments or changed conditions to the user's favourite/everyday settings (or after lending the device to someone else, or after making some change by mistake),

2. adjusting settings for the current scene - I sometimes find myself shooting a few photos with different settings just to choose the best one (mainly alternating WB, as the 500D's auto isn't too good).

I realize these scenarios are not what pros do (like lending the camera or shooting blindly to find the best values - especially for WB settings, which don't matter in raw), but maybe this would be useful.

This is a copy from https://bitbucket.org/hudson/magic-lantern/issue/1905/fetch-settings-from-a-photo since that tracker isn't used for feature requests in ML.
#2
Quote from: Audionut on September 24, 2013, 11:45:48 AM
Noise is the standard deviation. That's why it has become noise in the first place, because it has that deviation.  If it didn't deviate, we wouldn't be having this discussion.

I'm sorry to inform you, but you're horribly wrong. None of the 3 sources you've given states this; it only proves you don't know enough physics to understand what you read. Noise is the spurious signal (originating from different sources - I've elaborated on the main ones appearing on the sensor); its deviation is only one of the values that describe it.

Quote
As a photographer I only need to know that the shot noise is a square root of the photons collected.  And hence, the SNR is also this square root.  And hence, more light captured, more SNR.

As a photographer you need to know the last one, but not only that, because the previous statements are wrong. As the sensor noise combines different sources, you should also know that the total noise rises with temperature, so you should keep your sensor cool.
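(For the record - this is textbook statistics, not anything camera-specific: independent noise sources add in quadrature,

\sigma_{\mathrm{total}} = \sqrt{\sigma_{\mathrm{shot}}^2 + \sigma_{\mathrm{dark}}^2 + \sigma_{\mathrm{read}}^2}

and the dark/thermal term grows with sensor temperature and exposure time, which is exactly why cooling helps.)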

Quote
I am really not interested in discussing this further when you continue to try and debunk this theory with only your own assertions, that differ from numerous other sources available on the subject.

http://arxiv.org/pdf/cond-mat/9905024.pdf
http://optical-technologies.info/noise-in-photodetectors/

None of these differs in any way from what I've said. Apparently you're one of the people who can't see the difference between voltage, current, charge, field and power.
#3
Quote from: Audionut on September 22, 2013, 07:43:46 PM
Which is totally unrelated to the original statement.

Indeed... the purpose of the base ISO in dual ISO is to maximize the DR, so that's the best value to start with and it should remain the default. l_d_allan asked about defaulting to ISO 200 (with HTP), and that is not valid in general. Whether it might be the case for a particular scene is the subject of ETTR/ITTR, so I wandered too far, given that this began with such a simple question.


Everything below relates to noise itself (some physics, straightening out bad explanations from the web). I was about to put this into a separate post to be moved to some more appropriate place, but I don't think this thread will be continued anyway, so here it goes.

Quote from: Audionut on September 23, 2013, 06:06:17 AM
http://www.clarkvision.com/articles/digital.sensor.performance.summary/index.html#intro
Quote
In the physics of photon counting, the noise in the signal is equal to the square root of the number of photons counted because photon arrival times are random

This is just a mental shortcut and should never be given as any kind of definition - it is the "standard deviation of shot noise equal to the square root of the average number of events N", as quoted from the 'Shot noise' wiki article you've given (just before the wrong equation there). Proper equations are here: http://en.wikipedia.org/wiki/Signal-to-noise_ratio_(imaging%29 - note the definitions of the signal and the noise themselves; they are much more complex than 'count photons' (fortunately they reduce nicely when all we need is S divided by N). And the standard deviation has its simple formula thanks to the Poisson distribution. To conclude, having a simple final formula signal/noise = n/sqrt(n) doesn't mean one can take the numerator and denominator apart and say "signal equals the number of photons and noise equals the square root of this number".
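To spell the reduction out - plain Poisson statistics, nothing else assumed: for a mean photon count n, the signal is n and the shot-noise standard deviation is sqrt(n), so

\mathrm{SNR} = \frac{n}{\sqrt{n}} = \sqrt{n}

e.g. 10 000 collected photons give an SNR of 100:1. The final formula is simple, but its numerator and denominator are a mean and a standard deviation - not two independent physical quantities to be quoted separately.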

Secondly - the noise on the sensor is not related to some random photon arrival times, but to time-dependent current fluctuations with 2 main sources: thermal noise and electrical charge quantization (this is the square-root contribution, essential at optical frequencies - shot noise). There are also sensor area irregularities (semiconductor defects, inter-cell differences) and a bunch of quantum effects involved. That's why you get less noise from cooler sensors (thermal noise reduced) or larger cells (by either enlarging the entire sensor or having fewer Mpix - more photons caught in a cell, less relative shot noise). This has nothing to do with photons bouncing around randomly like balls until they splash onto the sensor, but rather with different balls (electrons) having a strictly specified capacity that must be fully loaded by the first ones before they proceed.
#4
Quote from: Audionut on September 22, 2013, 06:24:14 PM
Did you increase the amount of photos captured by amplifying the signal!  Did you increase the cameras ability to capture photons by amplifying the signal!

No, I didn't, and no, I didn't. What I did, however, is increase the camera's ability to process the captured signal.

Quote
Noise is not the 'N' part of the equation at all.  N is the number of events.

Actually, there is an N on the left side of the equation (S/N). Apparently this wiki article was written by someone who knows photography but is not fluent in signal processing. Note that only 3 non-English versions copied this buggy equation (using the same variable for two different things). I'm just waiting for something like sin(x)/cos(x) = in/co :)

Quote
As for your other questions.  I have better things to do then spend my time trying to explain technique, to then have those responses dissected in a negative way with supporting techniques/theories that hold no relevance to the original statements.

I'm sorry if that's what you feel I did - I can assure you that if I took any of your statements out of context and changed their meaning, it was not intentional (I'm not even sure where this happened, even after reviewing the thread once again). Nonetheless, please accept my apologies; I must put this down to trouble understanding/expressing myself in a foreign language.

Quote
If you want to continue to discuss matters with me, I am more then happy for you to do so, provided you support your assertions with accurate data.  Otherwise, I would ask that you discuss matters in a manner that doesn't belie your true understanding of the topics at hand.

OK, I've found the examples I had seen before: http://www.luminous-landscape.com/forum/index.php?topic=56906.0 and http://www.luminous-landscape.com/forum/index.php?topic=78677.0 - they explain the effect I'm talking about.
#5
Quote from: Audionut on September 22, 2013, 11:03:53 AM
Ok!  So the original question was, "why use the lowest ISO".  I'm pretty sure I explained the reasons why you should always use the lowest ISO in a dual ISO situation, but you now feel the need to pick apart my post!

Sorry, but you didn't explain why the lowest ISO should always be used. OK, maybe it's me who extended the question along the way - I should have asked "should this always be the lowest ISO?" - but now I see you confirm this. I don't. Please explain why I should use 100/1600 instead of 400/1600 when 2 stops on the right of the raw histogram are empty. No matter what ISO is applied, nothing clips (and by 'empty' and 'nothing' I accept the 0.3 stop you've mentioned, from some ‰ of the frame - or maybe that's what this is all about?).

Quote
No, it's a constant.  Increasing ISO will not allow you to capture more photons.  The maximum amount of photons that can be rendered correctly will always be at the base ISO (no gain) of the camera.

Of course I won't catch more photons at a higher ISO, that's obvious. And base ISO gives the maximum photon "spectrum" per ADC quantization in general. But general rules are not always true in specific situations - I can amplify the signal with impunity as long as it won't clip in the following blocks (still keeping in mind the 0.1 stop lost at ISO 200; if that's the case, then fine).

Quote
Quote from: gotar on September 22, 2013, 09:46:22 AM
Quote from: Audionut on September 22, 2013, 06:37:41 AM
Shot noise (the other significant contributor to total noise) is a square root of the number of photons.
Nope, it's the signal to noise ratio
http://en.wikipedia.org/wiki/Shot_noise#Optics

From the same article: SNR = sqrt(N). That is the signal-to-noise ratio, not the noise. Don't confuse them: the noise is the 'N' part of the equation, while sqrt(N) is also the stdev of the shot noise (but not the noise itself).

Quote
The SNR of f/2.8 - 1/1600s - ISO 100 is 4x greater then f/2.8 - 1/100s - ISO 1600.

While the exposure of the lower ISO is 16x longer then the exposure of the higher ISO (in this case), the rendered brightness is the same.

And the SNR of f/8 1/500 ISO 100 is the same as f/8 1/500 ISO 400. So - when one cannot open the aperture (because one simply needs that DoF) and cannot increase the time (dynamic scene, i.e. no ETTR or any other means to increase exposure except extra lighting), what's the benefit of using ISO 100 over ISO 400 (except those 0.3 stops)?

Quote
And since you want to be picky, increasing ISO does not reduce read noise.  Increasing analog ISO reduces read noise.  Digital gain will not reduce read noise.

First of all, I'm not being picky - I just want to be as precise as possible, as this forum is read by both professionals and amateurs, and I wouldn't like to write nonsense or mislead someone who isn't going to see the difference between S/N and noise. Second - I was clearly talking about analog amplification; I don't know where you got the digital mumbo-jumbo from, but it's also obvious that one cannot improve the signal in postprocessing (which is what "digital ISO" is).

Quote
Do you generally use a higher ISO to correctly render low light levels in cameras with noisy ADC's?  Yes?

Not generally, but sometimes - I'm not the one claiming that something (the lowest ISO) is always the best.

So, in order not to entangle this thread to the point of misunderstanding: do you see any specific situations where 400/1600 would not harm shot noise while lowering read noise? Or would you recommend always using ISO 100, disregarding the 0.1-0.3 stop DR loss?
#6
Quote from: Audionut on September 22, 2013, 06:37:41 AM
The purpose is to get extended DR from a single frame.

Why lowest ISO?

The maximum amount of photons (signal) capable of being captured, can only be captured at the cameras base ISO, where no digital manipulation and/or gain is occurring.

That happens only when one can do ETTR - but when the exposure is somehow limited (a hand-held camera or a dynamic scene limits the shutter speed, and a long DoF limits the aperture), there might be plenty of space on the right side of the histogram. That space does not extend the DR; there's simply nothing that can use the sensor's maximum - is there? So the question is: is that space wasted? There are readily available charts of read noise vs ISO (at constant exposure) - increasing the ISO usually yields lower noise, especially for Canons. This technique is called ITTR (ISO to the right).

Quote
The noise from the ADC only affects the shadow detail where it's percentage vs captured light is high (SNR).

Exactly. So you should get as bright an image as possible without clipping == ETTR. But when ETTR is not enough or can't be used, it's better to boost the signal before the noisy block and attenuate it afterwards. Any additional noise picked up after the pre-ADC amplifier would then be attenuated as well, so its contribution to the resulting S/N should be lower.
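A toy model of that argument (illustrative algebra only, no measured numbers): let g be the analog gain, \sigma_{\mathrm{pre}} the noise added before the gain stage and \sigma_{\mathrm{post}} the noise added by the ADC and everything after it. Referred back to the captured signal,

\sigma_{\mathrm{read}} = \sqrt{\sigma_{\mathrm{pre}}^2 + \left(\frac{\sigma_{\mathrm{post}}}{g}\right)^2}

so raising g shrinks the downstream contribution - which matches the read-noise-vs-ISO charts mentioned above - as long as the amplified signal still fits without clipping.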

Quote
Shot noise (the other significant contributor to total noise) is a square root of the number of photons.

Nope, that's the signal-to-noise ratio - we want this value to be as high as possible. No amplifiers/attenuators improve S/N; they only introduce their own noise. So we need as much signal as possible at the very beginning - that's ETTR. What happens when we pass through the sensor and reach the ADC? We get additional noise, independent of the amplification value (in dB) - no difference between low and high ISO here. The gain also boosts the sensor noise, but that doesn't matter, since the signal is boosted just as much. Next we go into the ADC itself and the rest of the electronics - just to pick up even more noise. So what's the difference? None, unless the read noise is worse for a low signal, as happens to be the case with Canon.

Quote
Excluding all other sources of noise, if we observe 2 different exposure settings,

f/2.8 - 1/1600s - ISO 100
f/2.8 - 1/100s - ISO 1600
[...]
Shot noise being the square root of the number of photons, so the SNR is,

1600/40 = 40:1
100/10 = 10:1

So the ISO 100 shot where the actual exposure (to photons (light)) was 16x longer, has a SNR ratio 4x greater then the ISO 1600 shot.

Assuming one can do a 16-times-longer exposure (i.e. ETTR). Consider an f/8+ aperture and 1/50 s - there's no way to catch more photons; the only thing you can do is lower the impact of all the noise blocks inside the camera. That requires boosting the signal as early as possible - and hopefully the ISO amplifiers introduce only linear noise of their own.

Quote
In summary, less photons (light), lower SNR.

The DR of the camera is limited solely by the cameras electronics.  ie:  The ability to capture light and where the noise from the cameras electronics is greater then the amount of photons captured.  In Canon cameras, this is the noisy ADC.  Here, analog gain before the noisy ADC helps to reduce the read noise.

We can observe the reduced read noise with increased ISO from the data from DxO.  Here for a Canon 5D Mark III,
[...]
The saturation numbers in the table above (for ISOs above 100) are an estimation of the amount of photons able to be captured, before the gain (ISO) overloads the ADC.

@ ISO 100 there is 11 stops of DR, limited by the the amount of photons able to be captured before overexposure (saturation), and (more importantly) the read noise from the ADC.  @ ISO 200, the DR should be reduced by (all but) 1 stop due to the limited photon capturing ability from gain overloading the ADC.  However, we can see that the DR reduced by only 0.1 stops.  The analog gain (ISO) was useful in reducing the read noise to a point where it offset the limited photon capturing ability.

That's exactly what I'm talking about.

Quote
Increasing ISO does not reduce shot noise, it merely applies gain to the already captured exposure.
Where we use ISO to boost the signal level before the ADC, we haven't increased the amount of photons captured, so we do not increase the SNR of the shot noise.

Increasing the ISO reduces the read noise (not the sensor's own noise, of course).

Quote
In general, the SNR of the shot noise is lower (more noise) in higher ISO shots, as we use higher ISOs to correctly render lower levels of light.

Not in general, but only when comparing to a lower ISO shot with a longer exposure. Assuming that one can always do this is definitely not a general case.

Quote
And since the base ISO controls the highlight data, we want to use the lowest ISO possible, not only to ensure the maximum amount of light is captured

Only as long as the maximum amount of light isn't already limited by other factors. Those factors might be the exposure itself or low-light conditions...

OK, having written all this, I got the answer myself... :) When the conditions don't allow overexposing the picture, one doesn't need dual ISO at all, because some single ISO value covers the entire available DR. That's why dual ISO uses base ISO.

...or maybe I didn't. If the base ISO at maximum exposure leaves, for example, 2 stops available on the right, this doesn't mean there's nothing in the shadows for ISO 1600 to fetch. Why shouldn't I use 400/1600 then?
#7
Quote from: l_d_allan on September 21, 2013, 08:56:47 PM
My understanding is that DUAL_ISO works best with the "base ISO" at 100

No - "base" ISO should be base ISO, whatever value this is for specific camera (not necessarily 100). I thought the purpose was to get minimum noise from analog pre-A/D amplifier, however I've read that Canons have high post-ISO noise, so indeed - why lowest ISO?
The purpose of "recovery" ISO is to pull everything possible out of shadows. You won't get any more DR if that was you were referring to, that's simple physics, everything is about lowering noise.
#8
Modules Development / Re: DSLR Arkanoid
September 17, 2013, 11:17:59 PM
Sokoban?
#9
Quote from: a1ex on September 16, 2013, 03:34:05 PM
You already have it, just press the shortcut key for zebras in playback (see help, it's camera-specific).

Preview data is JPEG.

I thought this was calculated from the visible screen, as it changes its shape on zooming in or out (to 1/4 of the screen when Canon's histograms are visible), and I've read here lclevy.free.fr/cr2: "The third IFD is containing a small RGB version of the picture NOT compressed (even with compression==6) and one which no white balance correction has been applied.", which seemed appropriate for a raw histogram. However, in some other CR2 spec I've read that this is camera-specific, so I assume you're right.
#10
General Chat / Re: Has canon even said a word about ML?
September 17, 2013, 03:45:56 PM
Quote from: larrycafe on September 17, 2013, 09:00:21 AM
haha, who knows? there could be some ML contributor coming from Canon  :P

You might laugh, but such things happen - there is alternative firmware for Ferguson DVB-S receivers (sharing, emu etc.) originating from the same source as the original firmware (you can find additional code in there, simply disabled), somewhere deep in Hong Kong.
#11
General Chat / Re: Has canon even said a word about ML?
September 16, 2013, 04:35:55 PM
Quote from: g3gg0 on September 16, 2013, 02:44:23 PM
believe me... not the code is the valuable thing - its the hundreds of hours testing and thinking.

they can simply reimplement it within few weeks, regardless of our license.

Sure - I bet they would write their own code anyway (they have all the specs you might dream of, wouldn't have to bother with booting or bricking, could easily attach a full-blown debugger, have control over the base firmware, could seamlessly integrate with it, etc.). I just answered the question: they can do anything they want without asking for permission or paying any kind of fees.

Quote
as long we didnt "patent" the methology (i.e. Dual ISO) they can use it for whatever they like.

Such patents are valid in the USA, but not in the EU. Well, here in the EU I can do anything I want to the hardware I've bought (apropos of a C100/C300/C500/1D ML port - it would be safe from a lawsuit). The rest is a business evaluation of what's worth more: having a nice feature, paying for a license, or shipping per-country firmware.
#12
Quote from: a1ex on September 15, 2013, 08:27:11 PM
For CR2 you need to *decompress* it first.

How about a histogram from the preview data (IFD2, I suppose)? It is uncompressed but small, so fast to read - much less accurate for sure, but it's subsampled from the raw data without any WB impact, AFAIK.
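Just to illustrate how cheap that would be computationally, a rough sketch in plain C (not ML code - it assumes the small RGB preview has already been read into an 8-bit buffer, and the function name is made up):

#include <stdint.h>
#include <string.h>

/* Luma histogram of an uncompressed 8-bit RGB preview buffer (sketch only). */
void preview_histogram(const uint8_t *rgb, int n_pixels, uint32_t hist[256])
{
    memset(hist, 0, 256 * sizeof(uint32_t));
    for (int i = 0; i < n_pixels; i++)
    {
        const uint8_t *p = &rgb[3 * i];
        int y = (299 * p[0] + 587 * p[1] + 114 * p[2]) / 1000;  /* Rec.601 luma */
        hist[y]++;
    }
}

One linear pass over a small buffer, so it should cost next to nothing compared to decompressing the raw data.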
#13
General Chat / Re: Has canon even said a word about ML?
September 16, 2013, 01:34:14 PM
Quote from: RenatoPhoto on September 15, 2013, 02:33:59 PM
Can Canon or any other manufacturer incorporate this code in their cameras without any agreements or loyalties to ML?

Of course they can - why wouldn't they? The GPL license allows anyone, including Canon, to redistribute the code and all its derivatives (providing the sources on request). And it has no impact on their internal (closed-source) firmware as long as they don't infect it with any GPL code.

A different matter is all the artwork (logo, font shapes, documentation etc.), which requires appropriate licenses (not code-related ones like the GPL). See the Librecad example, whose code was forked from Qcad (under Ribbonsoft maintenance) but which had to drop all the non-code parts.
#14
A non-writable card (either full or read-only) triggers a fullscreen warning and makes LV unusable. One might think it doesn't matter, as the shutter button doesn't work either, but when tethering the shutter is re-enabled and I can take remote shots (at least with my 500D) directly onto my Linux box - however, with none of the advanced ML functions. The basic overlays, including histogram, waveform and vectorscope, are available only via a trick: enter the ML menu (trash button) and make it transparent (zoom-in button). It would be more convenient to get rid of the original Canon warning (just show it where the available-photo count usually resides) and try to operate fully, with no artificial restraints (at least when a PC is connected; but some LV image analysis, even without write access, might also help when taking pictures with another, less-featured camera).

Of course one might say 'put in a writable card', but in the case of tethering that does nothing except wear the card, use power and cause some slowdown, as the pictures are eventually removed from the card anyway. The card is needed only for running ML and can be small and slow.
#15
In movie mode one can apply 4 effects that take effect during recording only - but one cannot take, for example, a negative photo. Is this some sort of hw limitation, or was it differentiated intentionally? Shooting negatives is a low-cost way of 'scanning' analog films. While converting dozens of positive photos (of negatives) back to negatives (to make them positive) is a rather simple task, a negative preview in M mode would be worth having. With a custom WB it would allow direct negative 'scanning' with minimal postprocessing.
#16
General Development / Re: String localization?
September 06, 2013, 03:54:53 PM
Quote from: dmilligan on September 06, 2013, 02:56:20 PM
I assume by this you mean that 'native strings' are required to be compiled into the binary as string literals for fallback purposes. Why? you could simply revert to looking it up in your default english translation file if a particular string is missing from a certain translation.

Of course; you can even set up a chain of fallbacks by specifying multiple languages in the order the user is familiar with them (as the LANGUAGE env variable does). But why do you insist on removing strings from the binary at all? I see no confirmation that this is a real problem (unless all the languages would have to be loaded).

Quote
I'm still of the opinion that code that is not litered with string literals is much cleaner, all of your code is algorithmic and not sprinkled with data. Pretty much all the code I write professionally is like this. Like Marsu42 said, you can always add code comments for clarification.

Do you name objects in your code by their function, or with some plain sequence? Do you export the literal names of symbols, or identifiers shortened to compress the binary? Do you know what this technique is called? Obfuscation.

Like a1ex said, it's not comfortable to maintain or read. Like I said, it leads to outdated translations (consider inverting a function's effect). If you don't like literals, simply fold every printf in your editor, but do not obfuscate the sources.
#17
Feature Requests / Re: Thermal throttling profile
September 06, 2013, 02:36:35 PM
Quote from: a1ex on September 06, 2013, 02:25:19 PM
First step: somebody needs to wait for a few hours so the high temperature warning appears (to find out how to detect it). Then run property spy.

I will do this next time.

Quote
You can also get away with the EFIC temperature, but you need to find at what value the warning appears.

It's 50°C on the 500D (in the ML menu I got 49°C just after the warning disappeared).

The heat was entirely generated by the body; I was running on an AC adapter in place of the battery, in calm ambient conditions.
#18
Feature Requests / LCD brightness: auto
September 06, 2013, 02:27:59 PM
Dealing with hardware which - well - does things with light, I really miss a feature for automatic adjustment of the LCD backlight. Of course the lens cap can be on, or one might be metering the light of some deep cave on a brightly lit day, but many situations are perfectly suited for this: being outdoors we need a bright LCD to see it in full sun, or a dimmed one when it's night. I often find myself blindly increasing the backlight while barely seeing the display.
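Even something as naive as the sketch below would cover my case (the helpers are purely hypothetical - get_scene_ev() and lcd_backlight_set() are invented names standing for 'metered brightness in EV' and 'set backlight level 1..7'):

/* Hypothetical helpers - names invented for this sketch. */
extern int  get_scene_ev(void);            /* metered scene brightness, roughly -5 (night) .. 15 (full sun) */
extern void lcd_backlight_set(int level);  /* backlight level 1..7 */

/* Naive auto-backlight: map scene brightness linearly onto the backlight range. */
static void auto_backlight_step(void)
{
    int ev = get_scene_ev();
    int level = 1 + (ev + 5) * 6 / 20;     /* maps [-5..15] onto [1..7] */
    if (level < 1) level = 1;
    if (level > 7) level = 7;
    lcd_backlight_set(level);
}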
#19
Feature Requests / Thermal throttling profile
September 06, 2013, 02:20:39 PM
This morning, after a few hours spent playing with ML (no real photos), I got the high-temperature warning. It would be nice if the most power-hungry features could turn themselves off automatically in such conditions (I'm not sure which ones, but probably waveform, vectorscope, zebras other than fast, maybe the entire LV, dimming the LCD etc.). It would be even better if this were implemented as a low-power profile that comes with some rational presets (made by people who know what drains power - I do not) and could be entered manually (or automatically on a low-battery condition).
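A minimal sketch of the idea (the hooks are invented names, just to show the shape of the feature; only the 50°C figure comes from my 500D observation earlier on this page):

/* Hypothetical hooks - not real ML API. */
extern int  efic_temp_celsius(void);       /* camera temperature reading in deg C */
extern void feature_disable(const char *name);
extern void lcd_backlight_set(int level);

#define TEMP_WARN_C 50                     /* warning threshold observed on the 500D */

/* Low-power / thermal throttling step, called periodically. */
static void thermal_throttle_step(void)
{
    if (efic_temp_celsius() < TEMP_WARN_C)
        return;

    feature_disable("waveform");           /* the most power-hungry overlays first */
    feature_disable("vectorscope");
    feature_disable("zebras-slow");
    lcd_backlight_set(1);                  /* and dim the LCD */
}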
#20
General Development / Re: String localization?
September 06, 2013, 01:42:10 PM
Quote from: dmilligan on September 05, 2013, 09:07:26 PM
Beacuse you were suggesting using gettext as your implementation which does do this at runtime

Native (original) strings are required anyway as a fallback for missing translations. I gave the gettext example mostly as an argument against using SOME_MACRO_STRING instead of the actual text; whether gettext itself is suitable or needs its own (probably much simplified) implementation is a different question.

Quote
This sounds like it would require modifiying the ARM compiler, I don't see how that is KISS.

At most it needs some 30-line Perl preprocessor run over the source tree (i.e. if there's a real need to fold these constants to save memory), so that's not the real issue here.
#21
General Development / Re: String localization?
September 05, 2013, 07:02:01 PM
Quote from: dmilligan on September 05, 2013, 06:11:11 PM
But then if you wanted to simply change the English wording a little bit, you'd have to change the file for all translations even if you don't need to change the actual translation.

That's the point - how do you know you don't need to change the translations into languages you don't know? If you are sure (like when fixing a typo), nothing prevents you from using some one-liner (sed, perl, whatever you know) to correct all the translations in 3 seconds. On the other hand, if some function's behaviour has changed and you only fix the original MACRO text, you'll end up with misleading or simply wrong translations.

Let me repeat: this is not my idea, but something that is used all around the world: the gettext library. If there were reasons to make it work this way, sooner or later you'll run into the same issues. Instead, KISS and don't reinvent the wheel.

Quote
And you have a big performance hit, b/c you'd have to have the english version of the strings in memory and whatever translation you are using, and you'd have to do string comparisons to look up the appropriate translated string.

Not necessarily - that depends only on the implementation. You could prepare a lookup table at build time (replace the actual strings with identifiers, move the text to a separate loadable file, the same as the translations). Why do you insist on doing manually something that could be handled in preprocessing?
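To show what I mean by 'handled in preprocessing', here is a hypothetical result of such a build-time extractor (the names are invented; only the shape of the idea matters): the script replaces each literal with an index, emits the English table as the fallback, and the selected language file fills the second table at load time.

/* Hypothetical output of a build-time string extractor - not real ML code. */
enum { STR_HELLO, STR_BATTERY_LOW, STR_COUNT };

static const char *str_en[STR_COUNT] = {      /* generated English fallback table */
    "Hello, world",
    "Battery low",
};

static const char *str_loaded[STR_COUNT];     /* filled from the selected language file */

static const char *tr(int id)
{
    return str_loaded[id] ? str_loaded[id] : str_en[id];
}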

Another thing: what should be displayed for missing texts when you load a partial translation, or don't have any translation file at all?

Quote
Another potential advantage of doing the way I described is that you could easily create a "minimal" translation that doesn't include any help texts and uses short abbreviations for power users who are familiar with ML, minimizing memory usage, and freeing up as much as possible for other things.

And why couldn't you do the same with 'my' approach? I see no difference here. Is it even true that these strings have such a serious memory impact?
#22
General Development / Re: String localization?
September 05, 2013, 05:05:13 PM
Quote from: a1ex on September 05, 2013, 03:26:40 PM
I don't want to declare macros for every single string, and then have them in a separate file. Reason: you can no longer understand what exactly the program is printing there without having two files open.

So don't declare any macros; use the original (English) string as the value - to be replaced when a localized version is available - wrapped in an appropriate function:

printf (_("Foo"));

This way one can easily disable the entire i18n at build time by defining _(s) as a simple 'return s' (in case i18n brings some performance penalty).
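For example, something along these lines (a sketch only - CONFIG_I18N and localize() are made-up names):

#ifdef CONFIG_I18N
const char *localize(const char *s);   /* looks the string up in the loaded translation */
#define _(s) localize(s)
#else
#define _(s) (s)                       /* i18n compiled out: zero runtime cost */
#endif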
#23
General Development / Re: String localization?
September 05, 2013, 01:59:30 PM
Quote from: a1ex on September 05, 2013, 08:51:09 AM
Who's going to update it when some string that is not used daily gets changed?

Somebody familiar with either the gettext tools or the https://www.transifex.com/ platform. A string with no translation remains in the original language (English) until Someone(TM) cares. If nobody cares, then, well... it doesn't matter :) The entire Linux ecosystem uses loadable translations (po files, or qm for Qt apps); the ones that are not maintained simply become obsolete.

Quote
Also, a new class of bugs will appear (mostly alignment issues, cropped strings, special characters... or increased memory usage for languages for which 255 characters is not enough...)

As for the charset, most Western languages can easily be transliterated to the base Latin alphabet. Such conversion, length checks etc. could be done at compile time. As for Asia - at least romanization of the Russian alphabet is also easy and standardized (ISO 9 or GOST 7.79/B). ASCII is enough to satisfy most people.

Quote
I'm still surprised when finding stuff broken for 1, 2, 3 months (e.g. RGB spotmeter) and nobody notices it, for example.

That's because an ML version with this bug was never released. Most people are scared of 'hacks' as such; bleeding-edge nightly builds are usually for developers and feature-hungry users.
#24
Feature Requests / Re: RAW to JPEG in-camera conversion
September 05, 2013, 12:27:06 PM
Understood. However - assuming it is possible to use the hw this way - maybe it would be worth exposing the appropriate functions to the scripting API? That would enable e.g. file managers to handle various DIGIC-assisted tasks (I guess there are some that could find interesting uses). For example: would it be possible to use face detection to implement 'zoom to faces' in playback mode?
#25
Hello everyone,

to begin with - I am aware that this request might look like it breaks the "don't do anything that can't be done in postprocessing" rule, but this feature would have at least one real-life use: freeing storage space when you run out of it far from civilisation. It has already happened to me that I had to choose between deleting some of the already-taken photos or not shooting any new ones. In such a position many would happily sacrifice the raw possibilities of some photos if only they could be preserved as JPEGs.

I've searched the forum and the bug tracker for a similar request and found only off-line raw movie compression, which unfortunately is not possible. I hope that in this case it is possible to feed raw data from storage directly into the DIGIC to perform hardware (i.e. fast) JPEG compression. Once again, I don't want this option to be the introduction of some fancy in-camera photo editor, but solely to give users the comfort of shooting raws without worrying much about their size, as some of them could be shrunk afterwards (probably when there's more time to review them).

Best regards and thanks to all the contributors and developers!