DIGIC overclock by ML?

Started by Luiz Roberto dos Santos, February 07, 2014, 09:56:04 AM




Quote from: a1ex on February 07, 2014, 09:23:14 PM
I found it on some forum (don't remember the link), and Audionut mentioned it too.

It was my understanding that some Nikon bodies have a slow mode where the sensor readout speed is reduced. I did a quick Google search and couldn't come up with anything about that, though.



Quote: This paper presents a 400H×256V pixel CMOS image sensor including 128 on-chip memory/pixel with 1Tpixel/s in burst operation without cooling and 780Mpixel/s in continuous operation. To improve the read-out speed from the chip, a noise-reduction circuit in pixel and relay buffers is introduced.


Quote: The settling time of an amplifier or other output device is the time elapsed from the application of an ideal instantaneous step input to the time at which the amplifier output has entered and remained within a specified error band, usually symmetrical about the final value.
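For a rough feel for why faster clocks cost accuracy: if you model the column amplifier as a simple first-order (single-pole) system, the step response settles exponentially, and the time to reach a given error band grows with the log of the required accuracy. A quick sketch (the time constant and error band below are made-up illustrative numbers, not anything measured from a Canon sensor):

```python
import math

def settling_time(tau, error_band):
    """Time for a first-order step response 1 - exp(-t/tau) to enter
    and stay within +/- error_band of the final value.
    From exp(-t/tau) <= error_band:  t = tau * ln(1/error_band)."""
    return tau * math.log(1.0 / error_band)

# Illustrative numbers: tau = 10 ns, settle to 0.1 % (~10-bit accuracy)
t = settling_time(10e-9, 0.001)  # about 69 ns
```

So halving the time you allow each sample to settle directly eats into how tight an error band (i.e. how many clean bits) you can read out, which is consistent with the slower-readout-is-cleaner theme in the quotes below.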


Quote: The results of the extended tests done on the pixel array with and without exposure to a Fe X ray source (energy peaks: 5.9 and 6.4 keV) are presented here [...] Although the chip was optimized to work at 100 MHz clock frequency, it is important to check the functionalities of the chip at lower frequency [...] Fig. 2(a) indicates that the temporal noise remains stable up to 100 MHz clock frequency. The FPN behaves similarly with a sharp increase above 100 MHz Fig. 2(b). This is satisfactory as the chip was optimized for 100 MHz clocking.


Quote: In CMOS image sensors, generally there is a trade-off between high speed imaging and high image quality. This is because high speed imaging brings high power with high thermal noise which also degrades the signal quality, and because fast data transfer may require more output pins, bringing more interference.


Quote: Conventional CMOS sensors implemented noise cancellation using analog CDS circuits, and furthermore provided digital outputs by integrating A/D converters on the same chip.

From what I understand, Canon falls into this conventional category.
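For anyone unfamiliar with CDS (correlated double sampling): the idea is to read each pixel twice, once right after reset and once after exposure, and subtract. The pixel's fixed offset appears in both reads and cancels out. A toy numerical sketch (the offsets and signal level are made up, and real CDS happens in the analog domain, not in software):

```python
import random

def cds(reset_read, signal_read):
    """Correlated double sampling: subtract the reset read from the
    signal read so the pixel's fixed offset cancels."""
    return signal_read - reset_read

# Toy model: each pixel has its own fixed offset (a source of FPN);
# both reads of a given pixel see the same offset.
random.seed(0)
offsets = [random.gauss(100, 10) for _ in range(5)]  # per-pixel offset, made-up numbers
true_signal = 500
reads = [cds(off, off + true_signal) for off in offsets]
# every CDS output equals the true signal; the offset spread is gone
```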

This google searching makes it quite clear that there are various CCD devices that allow the user to control readout speed.  One such example.


Quote: The user specifies the readout speed (pixel rate) and integration time in either line binning mode (1D) or area scanning mode (2D).

On the topic of read noise.


Quote: With sCMOS, the structure of the sensor inherently has more pixel variation, and the extreme low noise of the sensor makes variation more statistically significant. So when it comes to evaluating camera performance, the truly meaningful spec is rms noise.

Maybe RMS instead of MAD.
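The two estimators are easy to compare side by side. For Gaussian noise, RMS matches the true sigma while MAD comes out around 0.6745 × sigma, so one is convertible to the other; the practical difference is that MAD ignores outliers (hot pixels) while RMS weights them heavily. A quick sketch with synthetic noise (sigma = 2 DN is an arbitrary test value):

```python
import math
import random
import statistics

def rms(samples):
    """Root-mean-square of the samples (about the zero point)."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def mad(samples):
    """Median absolute deviation from the median; robust to outliers."""
    med = statistics.median(samples)
    return statistics.median(abs(x - med) for x in samples)

random.seed(1)
noise = [random.gauss(0, 2.0) for _ in range(100_000)]  # sigma = 2 DN, made up
# For Gaussian noise: rms(noise) ~ 2.0, mad(noise) ~ 0.6745 * 2.0
```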

Oh and pixel binning sounds like it might have a use.
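The appeal of binning, numerically: summing a 2×2 block adds the signal 4×, but uncorrelated noise only adds in quadrature (2×), so SNR roughly doubles at the cost of resolution. A minimal sketch on a list-of-lists frame (this is just summed software binning, not whatever the sensor hardware does):

```python
def bin2x2(img):
    """Sum each 2x2 block of pixels into one output pixel.
    Assumes even height and width. Signal adds 4x; uncorrelated
    noise adds only 2x (sqrt of 4), so SNR roughly doubles."""
    h, w = len(img), len(img[0])
    return [[img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
binned = bin2x2(frame)  # [[14, 22], [46, 54]]
```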


I knew I read it somewhere.


Quote: A qualification is in order here -- the Nikon D3 and D300 are both capable of recording in both 12-bit and 14-bit modes. The method of recording 14-bit files on the D300 is substantively different from that for recording 12-bit files; in particular, the frame rate slows by a factor 3-4. Reading out the sensor more slowly allows it to be read more accurately, and so there may indeed be a perceptible improvement in D300 14-bit files over D300 12-bit files (specifically, less read noise, including pattern noise). That does not, however, mean that the data need be recorded at 14-bit tonal depth -- the improvement in image quality comes from the slower readout, and because the noise is still more than four 14-bit levels, the image could still be recorded in 12-bit tonal depth and be indistinguishable from the 14-bit data it was derived from.
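The "noise > four 14-bit levels" argument checks out numerically if you treat quantization as an extra noise source of step/sqrt(12) DN added in quadrature. A sketch (the 5 DN read noise is an illustrative number, chosen to be above one 12-bit step of four 14-bit DN):

```python
import math

def total_noise(read_noise, step):
    """Combine read noise with quantization noise (step / sqrt(12))
    in quadrature; everything in the same DN units."""
    q = step / math.sqrt(12)
    return math.sqrt(read_noise ** 2 + q ** 2)

# All in 14-bit DN: suppose read noise is 5 DN, i.e. more than the
# four 14-bit levels that make up one 12-bit step (made-up figure).
sigma = 5.0
n14 = total_noise(sigma, 1.0)  # recorded at 14-bit step size
n12 = total_noise(sigma, 4.0)  # recorded at the coarser 12-bit step
# n12 / n14 ~ 1.02: only about a 2 % noise increase, visually negligible
```

Which matches the quote's conclusion: once the slower readout has bought you the lower noise, the extra two bits of tonal depth buy you essentially nothing.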


A guy I know is programming a lens via USB, something to do with limits on new electronic lenses, and replacing chips.