Apertus Axiom Beta

Started by Andy600, May 09, 2014, 01:02:28 AM


Andy600

This slipped under my radar but Apertus announced their Beta/crowd funding intentions and it looks like a great deal for early adopters to help develop the system. https://www.apertus.org/2014-NAB-Axiom-Beta

Ok, it's not Magic Lantern related (yet  ;D) but they have referenced Magic Lantern when describing the Axiom's open source credentials. Definitely worth a look and worthy of ML input.

Plus there's an indirect dig at (I presume) Blackmagic in the write up  :P
Colorist working with Davinci Resolve, Baselight, Nuke, After Effects & Premiere Pro. Occasional Sunday afternoon DOP. Developer of Cinelog-C Colorspace Management and LUTs - www.cinelogdcp.com

Sebastian

Thanks to your work on Magic Lantern, people now understand why open source firmware is important for filmmakers, and what creative potential and technical capabilities it unlocks in a device.

Countless times I have been asked why DOPs should care about the camera they are using being "open" - they say they are not programmers, so they won't do anything with the code anyway...

I hope to see many Magic Lantern developers/users in our AXIOM Beta early adopters community. We would love to hear your feedback and incorporate your ideas into the AXIOM Beta hardware/software. Since everything will be open and well documented, I know it's not as much of a challenge as when you have to blink out and de-compile Canon's firmware, but maybe it's still fun to focus more on the creative high-level features :)

PressureFM

This is a really interesting project, Sebastian.

Any reason why you have opted for an external recorder, instead of having internal storage over PCIe? Simplicity for the Beta?

It just seems weird having an onboard MicroSD card and not something faster.

The small size of the Apertus Beta seems kinda overshadowed when you need to hook it up with external recorders and whatnot.

Update: Ah ... I could have just read what it says on the page. It's because of the MicroZed board, I suppose.

Luiz Roberto dos Santos

[message merged on this topic]
Hi,
A crowdfunding campaign has launched to kick-start an open hardware project. The goal is to produce the Axiom, a camera that records 4K at 60 fps with 15 f-stops of DR.
I'm just spreading the word because I have followed this project's growth since last summer, and I'm also involved with the GNU community - the idea is pretty cool. The quality is comparable to an Arri Alexa, but much cheaper, with an open community and without the baggage of the "big brands", such as patents, licenses and other restrictions that limit your freedom. The camera has no such restrictions: even the lens mounts can be exchanged rather than being fixed, which allows you to use, e.g., Panavision, RED or Cooke lenses on the same camera without stress.
The whole hardware implementation is based on a reprogrammable FPGA. It runs Arch Linux by default, on a dual-core ARM processor.
It is worth taking a look, folks:

https://www.indiegogo.com/projects/axiom-beta-the-first-open-digital-cinema-camera



Here is a video of the results (of course, this is just samples without post-production):



KurtAugust

Sebastian,

While it is indeed something completely different from taking a mass-produced camera and making it do the most incredible stuff that wasn't even thought of, or was deliberately held back, I certainly hope to see the same spirit with Apertus as we are seeing here.

The mere thought of a camera company giving you all the functions it CAN give, not only the ones it wants to give at a certain price point is really exciting.

If all the people using Magic Lantern donated 1 euro, even if it were only to make a statement to the industry about opening standards, your funding goals would be met quite soon! (Good that you accept PayPal!)

Is there any chance of overlapping functions? A lot of post-processing tools have already been made. How about supporting the MLV codec?
www.kurtaugustyns.com @HetRovendOog

Sebastian

Glad to see you talking about our campaign in here.

If some Magic Lantern users would be willing to support us that would be much appreciated!  :D

I do not see any problems supporting the MLV codec on the Beta if there is some interest from users.
In the first stage we focus on the image acquisition part only, so we do external recording only, but it's just a matter of time until we get to the point of creating an internal recording module.

g3gg0

hi sebastian.

i just want to say that i am willing to provide help regarding MLV support in-camera.
just tell me what you need - detailed explanation, just some code or another developer ;)
i am from germany, so it's relatively easy to cooperate i guess. (your team looks german)

BR,
georg
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

KurtAugust

The image sensor seems the weakest link to me. It needs more DR. Too bad hardly any chip manufacturer wants to be open and publish the specs. Going through all this trouble and ending up with a thin image may stand in the way of success. Well, that and seeing how long it took for Red to come out of an actual state of beta. Or Blackmagic, with some very weird policies and PR calling any problem with the camera a moot point. It seems so incredibly difficult to get it right. I really hope you succeed, but you'll need some time!

pds

The CMV12000 chip has its specs online on the supplying company's (CMOSIS) website, I believe.

KurtAugust

I know, I know. The funniest thing is that the chip manufacturer seems to be only a couple of streets away from my house. Anyway, I took the effort to download the footage (http://files.apertus.org/AXIOM%20Alpha%20Sample%20Footage%20Selection%20Ungraded.mxf) and found it quite problematic to work on. Very little color information, crushed blacks and whites. There's a big difference between creating a chip for machine vision and ending up with a pleasing natural image (no wonder the Alexa is a big box). There is indeed fixed noise (they've said that), but I also softened the image a bit because of the fine detail messing with the RGB pattern of the chip. I still prefer the 5DIII shooting MLV for now - and how old is that camera?

I'm definitely rooting for them (although my financial contribution was rather symbolic) but they will need all the help they can get!

(I also wonder about the fan on top of the beta model of the camera)

In my cc test I can't get further than this and I don't like that. It looks so graded.



full res: http://www.blog.kurtaugustyns.com/wp-content/uploads/2014/09/AXIOM-Alpha-Sample-Footage-Selection-Ungraded-0014007b.jpg

g3gg0

@Sebastian:

do you have a sample raw frame and some details how the bits are packed?

OT: are you from munich?

Sebastian

Sorry for the delay, returned from IBC yesterday night :)

I am located in Vienna.

Another developer would be amazing :)

For still image storage we currently use a custom camera format that basically just writes the raw bits of each pixel in a 16-bit sequence; details about RAW16, as we called it, are here: https://wiki.apertus.org/index.php?title=RAW16
There is also the derived format RAW12, which, as the name suggests, just packs 12-bit sequences, as the CMV12000 can only go up to 12 bits.
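If RAW12 simply concatenates the 12-bit pixel values MSB-first with no padding (my assumption from the description above - the wiki page is authoritative), unpacking two pixels from every 3 bytes could look like this sketch (hypothetical helper, not official Apertus code):

```python
def unpack_raw12(data):
    """Unpack an assumed MSB-first 12-bit stream: every 3 bytes hold 2 pixels.
    Byte/bit order is an assumption based on the format description."""
    pixels = []
    for i in range(0, len(data) - len(data) % 3, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        pixels.append((b0 << 4) | (b1 >> 4))      # first pixel: upper 12 bits
        pixels.append(((b1 & 0x0F) << 8) | b2)    # second pixel: lower 12 bits
    return pixels
```

So bytes AB CD EF would decode to the pixel values 0xABC and 0xDEF under this assumed layout.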
Tuning the image pipeline is definitely some work ahead, but hey, if we succeed we can do it together and not wait until some company does it for us :)

Some images in RAW16 format are available here: http://vserver.13thfloor.at/Stuff/AXIOM/ALPHA/RAW/

Image Sensor datasheet is on Github: https://github.com/apertus-open-source-cinema/alpha-hardware/tree/master/Datasheets


KurtAugust

Visited IBC today. Another camera manufacturer confirmed he was using the CMOSIS chip as well. Looked good. They had it in a more industrial housing and all the kinks worked out by now. Are Blackmagic and Aja using these as well?

(Aja footage can look much better than what they uploaded to the internet, btw, I've seen a much less contrasty version)

g3gg0

@sebastian:
for raw video i recommend implementing a variable bit depth in your zynq FPGA domain.
it is much more comfortable to reduce bit depth than to reduce frame rate.

iirc the canon format is like this:
you get a 14 bit data stream from the ADC hardware path, MSB leftmost.
so assuming you get pixels A, B, C and D with their corresponding data bits aaa..., bbb... etc, the data stream looks like this:
aaaaaaaaaaaaaabbbbbbbbbbbbbbccccccccccccccdddddd...
leftmost 'a' is the MSB, rightmost the LSB

this data is shifted into a 16 bit wide shift register:
aaaaaaaaaaaaaabb  bbbbbbbbbbbbcccc  ccccccccccdddddd ...

and data is stored into memory through a 16 bit write on a LE machine.
so the resulting byte stream looks like this:
aaaaaabb aaaaaaaa bbbbcccc bbbbbbbb ccdddddd cccccccc ...
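This shift-register packing can be sketched in a few lines of Python (a hypothetical helper for illustration, not actual camera or ML code):

```python
def pack14_le16(samples):
    """Pack 14-bit samples into an MSB-first bitstream, then emit it as
    16-bit little-endian words (low byte first), as described above.
    Assumes the total bit count is a multiple of 16 (pad the tail in practice)."""
    acc, nbits = 0, 0
    out = bytearray()
    for s in samples:
        acc = (acc << 14) | (s & 0x3FFF)       # append 14 data bits, MSB first
        nbits += 14
        while nbits >= 16:
            word = (acc >> (nbits - 16)) & 0xFFFF
            nbits -= 16
            out.append(word & 0xFF)            # little-endian: low byte first
            out.append(word >> 8)
    return bytes(out)

# With pixel A all-ones and the rest zero, the first 16-bit word is
# aaaaaaaaaaaaaabb = 0xFFFC, stored as the bytes FC FF on a LE machine.
```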

if you implement this in FPGA logic, most of the MLV effort is already done.


then you only have to packetize the payload of the video frames into a VIDF block per frame, update the
timestamp field with a 1µs accurate timestamp and write the data into a file.
before the first VIDF, there must be at least a MLVI (file header) and a RAWI (raw format description) block.

so the minimal solution:
MLVI, RAWI, VIDF, VIDF, VIDF, ...

then you can process the .mlv files with most of the tools.
after that, you can increment the number of blocks with additional information.
e.g. adding EXPO, LENS, RTCI, INFO, IDNT etc blocks to store all possible data.


another idea: did you think of adding a gyro to your device?
then sample it every 1 ms and store a trace of camera inertial movement (3 axes, 3 rotation rates) along with every single frame.
this data can be used for offline deconvolution of shaky images. (ok, assuming you have a mainframe for the calculation...) ;)


how can i help you with development?

Bertl

@g3gg0:
Thanks for the information!

regarding the gyro: the Axiom BETA will feature a 9DOF IMU (probably more than one chip), and we are looking forward to evaluating it for image stabilization and other interesting ideas.

regarding help: probably best with documentation and/or examples.

Thanks,
Herbert

KurtAugust

Apertus,

With the new campaign update*, am I the only one thinking about a mode to record all the pixels of the sensor, a bit like the Arri open gate mode (if I am correct)? Anamorphic, ..., reframe - lots of possibilities. Everything is possible if you control the hardware, right?

Also, this camera head combined with the Atomos Shogun could make a very lean camera package. (http://www.atomos.com/shogun/) Interesting!

*which comes at a perfect time and should give you a boost in funding.

philippejadin

KurtAugust,

You are right: since everything will be open source, you'll be able to design any type of recording you wish. It could well be anamorphic, cropping parts of the sensor, etc.

The limitations here are not artificial (for example, dictated by the marketing department): they are based on the skills and interests of the people working on the camera. As always in open source, if your desired feature is demanded by a lot of people, it will happen sooner. Anamorphic shooting seems like something a lot of people will want.

Atomos Shogun is definitely one of the possible options for recording.

a1ex

Some of you wondered about the sensor, so I did a quick analysis using a few samples from Sebastian (Axiom Alpha, CMV12000).

This sensor has 4 analog gain levels: 1, 2, 3 and 4. I'll try to find out what ISO they correspond to, so you know how it compares to other popular cameras.

Hope you will enjoy pixel-peeping the dark frames :P

Response curve

Seems pretty much linear, with a tiny roll-off in highlights, and a strange shape on the right side (probably black sun correction).


These graphs contain a rough guess for the white level.

Black level
This one is tricky - it changes with gain, exposure time, temperature and... overexposed areas (!), and it changes a lot. The spec says "Dark current 125 e-/s (25 degC)". There is an optical black (OB) area of 8+8 columns, which may help (I didn't test it).

On Canons, black level is clamped to 1024 or 2048 or similar values, with minimal variations.

FPN and row/column noise

Showing dark frames downsized by averaging 10x10 pixels (this process reveals FPN).




Left to right: regular dark frame; difference between two dark frames (delta); delta after subtracting row average over 8+8 columns (first 8 and last 8 ); 40+40 columns; 200+200 columns; and - last image - after subtracting the average of every single row and every single column.
Top to bottom: gain = 1,2,3,4.
All graphs have brightness stretched after throwing away the 1% darkest pixels and 1% brightest ones. They show how uniform the noise is, not its magnitude.

After subtracting two dark frames (taken a few seconds apart), the image still has really strong row noise (visually similar to FPN, but it's not fixed - it changes every frame). This one is hard to correct. CMOSIS recommends using the optical black areas for this, but they are way too small (8 + 8 columns) and their size cannot be changed. Subtracting their average from a dark frame fixes only half of the row noise. If you were able to change the OB size to 40+40, this would reduce the row noise by about 2 stops, and a 200+200 OB would reduce it by 3 stops. Additional difficulty: the magnitude of the noise is different in the center of the image compared to the edges.
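A minimal sketch of the OB-column correction described above, in plain Python (hypothetical helper; a real pipeline would do this in the FPGA or with vectorized code):

```python
def subtract_ob_row_offsets(frame, ob_left=8, ob_right=8):
    """For each row, subtract the mean of its optical-black columns
    (the first ob_left and last ob_right samples), removing per-row offsets.
    frame is a list of rows, each a list of pixel values."""
    corrected = []
    for row in frame:
        ob = list(row[:ob_left]) + list(row[len(row) - ob_right:])
        offset = sum(ob) / len(ob)               # per-row black estimate
        corrected.append([p - offset for p in row])
    return corrected
```

With only 8+8 samples per row, the estimate itself is noisy, which is why averaging so few OB columns removes only about half of the row noise, as noted above.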

The only solutions I can imagine are:
- submit a feature request to CMOSIS to change the OB size :P
- a really clever image processing algorithm (way beyond my skills)
- some hope for this noise to be correlated in adjacent frames.

Subtracting some value from every row and every column of the darkframe delta (so, 4096 + 3072 parameters) is enough for fixing the random row noise - the hard part is finding these parameters, because they change for every single frame. Here's how the dark frame deltas look after subtracting the average of every row and every column (gain=1..4):


But subtracting some value from every row and every column of the dark frame is not enough - one has to average a large number of dark frames and store it as calibration data. Here's why (gain=1..4):


SNR and dynamic range

Using this method (measuring the noise between two images and decomposing it into dark noise and shot noise), I've got the following results:


Plotting these results on the ISO/DR graph, assuming a quantum efficiency = 60% (source), gives:


So, the base ISO on this sensor is about ISO 400, on the scale used by DxO. Canon ISOs are evaluated by DxO around 80, 160, 320... usually 1/3 stop lower than what you choose in the menu. The sensor seems on par with the Nikon 1 V2 regarding dynamic range and low-light behavior.

To find the SNR curves, I've subtracted two images of the same static scene, fixed the row noise, and sampled 17x17 patches randomly (5000 patches). To find the signal level (patch average), I took the average of the two images, and subtracted a dark frame (because of black level issues). I've measured DR from the point where signal=noise (SNR=0EV) until the clipping point, assuming the sensor response is linear (didn't do any linearity correction). More details in the ISO research thread.

The data set for gain=1 had some trouble with black level, so I did some guesswork here (adjusted it manually until it looked good).

I did not consider the infrared blocking filter - depending on how strong it is, it would shift the graph to the left side, effectively lowering the ISO.

Will binning to 1080p improve the dynamic range?
Yes. On this sensor you could do a 2x2 binning, which will improve the DR in shadows by up to 1 stop.


Binning is best done in the digital domain (software) - in this case, it will average all the noise sources. If it's done in analog domain, as on Canons (that is, before the read noise gets introduced), only the noise that appears before the binning circuit will be reduced.

Example: the 5D3 uses 3x3 binning in video mode (no pixels discarded), while the 6D uses 1x3 binning (with line skipping, discarding 2/3 of pixels). Yet, the 6D manages to have lower noise than 5D3, at all ISOs.

How accurate are your SNR/DR/ISO measurements?
I don't know, you be the judge. The measurements were not done in a laboratory - I simply asked Sebastian to point the camera at a bright window, include something black in the frame, and make sure the lens is really out of focus.

Repeatability is quite good - after running the test on two 5D3 cameras by two different people and two different test scenes, the results overlap quite well (see here).

If any step from the testing procedure is unclear, just ask.

Some double-checks:
- measured gain ratios match the programmed gain ratios very well:
  5.65 / 2.85 = 1.98, ideally 2
  2.85 / 1.94 = 1.47, ideally 1.5
  1.94 / 1.44 = 1.35, ideally 4/3
- measured dark noise at higher gains (10.15 electrons) matches the number quoted here (10 electrons).
- measured full well and dark noise at lowest gain matches the spec somewhat (13384/13.60 vs 13500/13).
- measured gain does not match the spec (e/DN); still need to figure out why
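The gain-ratio double-check is easy to reproduce (measured gain values taken from the list above):

```python
def gain_ratios(measured):
    """Return consecutive gain ratios, highest-to-lowest as listed in the post."""
    return [round(hi / lo, 2) for lo, hi in zip(measured, measured[1:])][::-1]

measured = [1.44, 1.94, 2.85, 5.65]   # measured gains for settings 1..4
print(gain_ratios(measured))          # [1.98, 1.47, 1.35] vs ideal [2, 1.5, 1.33]
```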

What do you think about the HDR modes?
Axiom lists 3 HDR modes on their website:
- PLR: somewhat similar to RED HDRx, combining 2 or 3 exposures in hardware (uses shorter exposure time in highlights). The spec says you can get up to 15 stops in this mode, but I think you might be able to push it even more. Didn't try it, but here's an opinion from somebody who did: http://www.dvxuser.com/V6/archive/index.php/t-299568.html
- "dual shutter" - similar to dual ISO; my converter is likely to work with minimal changes.
- alternate shutter every frame - similar to ML HDR video, with more potential because of the higher FPS.

All these modes are basically using a faster shutter speed in highlights - however, they will not improve the shadows. Therefore, they are equivalent to lowering the ISO - I expect it to reach crazy values like ISO 10 or maybe even lower.

All of these modes will have some motion artifacts. I don't know how bad they are, but I expect them to be not as bad as ML HDR video.

Will Dual ISO work?
Even if you find a way to configure the sensor to scan at two different gains, the improvement would be log2(13/10) = less than 0.4 stops. Not worth the effort.

What about long exposures?
Here we have a huge advantage: on this camera, we have access to the FPGA, which is much faster at image processing than the general-purpose ARM processor. One could capture short exposures and average them on the fly, which has a very nice impact on the dynamic range.

How much? Stacking N images will improve shadow noise by log2(sqrt(N)) stops - so averaging 64 images will give 3 more stops of DR, just like that. Assuming the hardware is fast enough to average 4K at say 100fps, a 10-second exposure could have a 5-stop DR boost. Without motion artifacts or resolution loss.
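The stacking arithmetic above can be checked in a couple of lines (frame counts are the ones assumed in the text):

```python
import math

def stack_dr_gain_stops(n):
    """Averaging n equal exposures reduces read noise by sqrt(n),
    i.e. log2(sqrt(n)) extra stops of dynamic range in the shadows."""
    return math.log2(math.sqrt(n))

print(stack_dr_gain_stops(64))              # 3.0 stops, as stated above
print(round(stack_dr_gain_stops(1000), 1))  # ~5.0 stops for 10 s at 100 fps
```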

Where's the catch?
Read noise (in electrons) does not depend much on exposure time (on 5D3, the noise difference between 1/8000 and 15" is minimal). Therefore:
- A short exposure would capture P photons with read noise R. Adding N frames would give N*P photons with read noise R * sqrt(N).
- A long exposure would capture N*P photons in a single frame (clipping N times earlier), with read noise R.

So, a stacked exposure, compared to a long exposure, would give:
- log2(N) stops of more highlight detail (think of it as if it were a lower ISO)
- log2(sqrt(N)) stops of more dynamic range
- log2(sqrt(N)) stops of more shadow noise (in electrons)

=> there's no free lunch. It's great for replacing a ND filter, but it's not ideal for astro.

At very long exposures (hours), things may be different - such a long exposure may no longer be as clean as a short one, or it might simply clip too many highlights. I don't have any experience with astro, so I'm just guessing.

Yes, I'm going to implement this in Magic Lantern as well - we have just found a routine that adds or subtracts two RAW buffers on the dedicated image processor (without using the main CPU).

This sensor can do 300 fps. Can you do the above trick for video?
(credits @anton__ for this idea)

The sensor does not do 300fps at maximum resolution, but I guess it can do this at 1080p (even with hardware binning). Note, this is pure speculation. If the Axiom hardware is fast enough to add 1080p frames at 300fps (I have no idea if it is), you could create a 1/50 exposure (180-degree shutter at 25fps) out of 6 frames captured at 1/300. This means 1.3 EV boost in DR, at the cost of 1.3 EV of shadow noise (it will require more light). Good for emulating a low ISO, without motion artifacts like the other HDR modes.

Again - I don't know if the hardware is fast enough for this.

On the ISO/DR graph (scroll up) I've plotted what would happen if you averaged 4 frames.

What about underclocking the sensor?
I don't know. If you can try it, feel free to send me sample images.

What about the other sensor - KAC-12040?
It has 12 stops with rolling shutter (3.7 electrons), and 10 stops (25.5 electrons) with global shutter, but it's a little smaller (4.7μm vs 5.5μm). I don't have any samples from it, so can't tell much, but judging from the specs, it's better in low light by about 1 stop (of course, in rolling shutter mode). If you have it up and running, feel free to send me some sample files.

It has analog gain (values mentioned in datasheet are <1, 2 and 8 - whatever that means). I don't know how much it improves the dark noise.

The datasheet hints that dark noise might get better at lower LVDS clock. Cool.

I've placed this sensor on the DR graph based on datasheet values (full well 16000, dark noise 3.7 or 25.5, max QE 47%, 4000 horizontal pixels, active width 18.8mm).

What about the CMV20000?

In low light it's probably as good as the 60D, judging from the specs.




My conclusion

Global shutter and 300fps are not free - you lose a little low light ability, but not much.

CMV12000 - full resolution
The base ISO is about 400 (maybe lower once you attach the IR blocking filter), and it goes to about ISO 1250 with analog amplification. At high ISO, the noise improves by only 0.4 stops. If my math (and also DxO and sensorgen's math) is not screwed up, this sensor is on par with the little Nikon 1 V2, and about 1 stop behind a Canon 60D.

The row noise (banding) is a major problem, and I don't currently have a solution for it.

CMV12000 - 1080p
I didn't test this mode, but with proper 2x2 binning (without introducing additional noise), this sensor would catch up (but that's because Canon does poor downsizing). At its base ISO, it's really close to the 5D3 in 1080p RAW at ISO 800 (note that 5D3 ISO 800 in LiveView is more like DxO ISO 500). In low light, it's only 0.5 stops behind the 60D in 1080p RAW (1734x975), and about 2 stops behind 5D3 in 1080p RAW ISO 6400.

Not bad for a very fast sensor with global shutter. And there are tricks to squeeze even more DR, without motion artifacts, like averaging 4 or 6 frames, or exploiting the black sun correction to squeeze more highlights.

KAC-12040
For low light, in rolling shutter mode, the smaller sensor is better on paper, despite its smaller size (I expect it to be about 1 stop better). I didn't test this sensor, so I've only plotted what I could figure out from the spec - but there are hints that low-light performance might be even better.

I believe this sensor will be very similar to GH4 in low light, and with a bit of luck, on par with 60D (which is 0.5-stop better than GH4).

In global shutter mode it's not that good for low light - you'll get better results from a smartphone (Lumia 1020).

But hey - you can choose between global shutter and low light, without swapping the sensor!




Hope the above helps you choose which sensor is best for you.

Anyway, if the sensor proves to be the weak link, it can be exchanged. Feel free to suggest better options - for example, if you can ask Sony about Exmor sensors, please do.




Raw samples: http://footage.apertus.org/AXIOM%20Alpha%20footage/ML/
More graphs: https://www.dropbox.com/sh/xarr1i7vm7cwevd/AAA8OydI3VNZOTPQ-7Ta5aPRa?dl=0
Octave scripts on request.

KurtAugust

Thanks for doing these measurements! Really interesting to study the DR graph and to see how, in general, Canon catches up with the Nikon in DR at higher ISOs.
If I read the graph right, I'm a little disappointed in the Apertus chip. Even with HDR it scores worse than the Canon 5D3 - unless, theoretically, we use it at ISO 50. Is there really no good-DR chip out there? I don't care so much about resolution or size; I care more about color and sensitivity. The fixed pattern noise can be fixed - that's what another camera maker using the chip told me. But they had invested much effort in it.

Also, it's quite worrying that Apertus are still 40k short of getting their program funded. At this point, I don't see it happening in its current form. Does this prove the big companies right - that we should pay big bucks to get the big tools?

(Combined with some traction for promising, freshly announced cameras from Sony, Panasonic, Aja, etc.)

And points to Magic Lantern for being clever - building on hardware that already exists and is readily accessible?

It would be interesting to see Arri and Red chips plotted on there, just as a reference.

(edited because I needed to take in the info a bit more)

PressureFM

Very informative!

Hope the project gets off the ground because ML have really shown why an Open Source environment would be great for the users.

a1ex

Quote from: KurtAugust on September 27, 2014, 08:32:20 AM
If I read the graph right, I'm a little disappointed in the Apertus chip. Even with HDR it scores worse than the Canon 5D3. Unless we theoretically use it at ISO 50.

Correct - but I'm afraid this is the price to pay for global shutter. I don't know much about the internals, but the other sensor (KAC-12040) gets much better DR (and low light) once you switch to rolling shutter mode.

Quote from: KurtAugust on September 27, 2014, 08:32:20 AM
Is there really not a good DR chip out there?

I'm not an expert in this topic, but here's a hint from Samuel H (he also experimented with the CMV sensor, and probably researched the market a bit):
Quote
And then: Sony has a very limited range of readily-available sensors, nothing bigger than 2/3" IIRC. They sell bigger sensors to Nikon and Pentax, but won't talk to you unless you can put a few million dollars in the table and sign a contract that assures them you'll buy A LOT of sensors. And even then, I doubt they'll give you the F55 sensor. Maybe the one on the NEX-5N, but that one has to do line-skipping to record video.

But maybe somebody knows how to ask them nicely :D

Here's a hint from jrista from the canonrumors forum: http://www.canonrumors.com/forum/index.php?topic=22150.msg442597#msg442597
Quote
I have an astro camera with a CMOS sensor from Aptina that has 120dB of dynamic range using an approach similar to ML's dual ISO. That's 20 stops.

But looking at their page: https://www.aptina.com/products/image_sensors/
the largest ones are AR1011HS and AR1411HS. I've found some data about their performance here:
AR1011HS: http://image-sensors-world.blogspot.com/2014/04/aptina-aims-its-1-inch-and-12-inch-4k.html http://www.jstage.jst.go.jp/article/mta/2/2/2_95/_pdf
AR1411HS: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6578704&tag=1

I've plotted them on the graph, based on the above data:


but I'm afraid the KAC-12040 is still better, both in DR and low light. Note my KAC plot is incomplete (the datasheet only mentions the base ISO, but it does have analog gain - so it's likely better than the dotted black line).

Also see this: http://www.leti.fr/en/content/download/1184/18110/file/01_CMOSIS_L%20De%20Mey_%20Leti%20280611-1.pdf
(CMOSIS appears to have plans for better sensors, but I have no idea when they will be ready)

Sebastian

Many thanks for taking the time to measure all this; it's indeed very interesting.

About row noise: please check the application note from CMOSIS I emailed you (a1ex); they suggest applying a reference voltage to some additional sensor pins and report a significant improvement in row noise behavior.

About PLR:
I also think it can go beyond the officially listed 15 f-stops - I ran tests with +6 f-stops, and while the footage was not without flaws, in general it worked and definitely looked promising.
I noticed, though, that color saturation decreases with the amount of PLR highlight recovery, and the fixed pattern noise gets stronger in those areas. Both effects might be possible to compensate for in internal image processing. Maybe we can measure/benchmark those settings as well in the future.

Here is a quick +6 stops test where I varied the lens aperture to measure latitude differences before turning on PLR:



PLR HDR vs HDRx:
As I understand it (I have not personally shot any HDRx footage on Red cameras) the differences are:
-) HDRx saves 2 exposure bracketed image streams and you can mix them together with different algorithms in post.
-) PLR HDR combines 2/3 (selectable) exposure brackets and mixes them together, based on a luminosity threshold, on the sensor as the images are gathered. The output is a single image stream - no post processing required to combine exposures.
-) with PLR, highlights lose light sensitivity while shadows keep theirs - the curve can be tweaked in all aspects.
-) with PLR, flickering can occur in highlights created by flickering light (AC tungsten, flickering CRT screens, magnetic-ballast fluorescent tubes), as PLR reduces exposure time in highlights. But PLR actually does up to 3 exposure phases within one normal exposure time (not a single short one), so the effect might not be that visible -> this will require more tests (I have not seen flickering in any PLR footage I have shot yet - but I used PLR mostly in bright sunlight, which will obviously not start to flicker :) )


About max FPS
The CMV12000 V2 can indeed go up to 300 FPS at full 4096x3072 resolution in 10-bit mode; in 12-bit mode the max FPS is reduced to 180 FPS according to the official specs. By reducing the number of rows read (a smaller window, e.g. reading a 16:9 window from the 4:3 sensor), the max FPS will increase beyond 300/180 FPS.
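As a rough sanity check of the windowed-readout speedup, here is a naive scaling model (my assumption, not an official spec - it ignores per-frame overhead, so real hardware will come in somewhat lower):

```python
def approx_windowed_fps(full_fps, full_rows, rows):
    """Naive estimate: readout time scales with the number of rows read,
    so max FPS scales inversely with window height. Ignores fixed overhead."""
    return full_fps * full_rows / rows

# CMV12000 V2 in 10-bit mode: 300 fps at the full 3072 rows.
# A 1080-row window then scales to roughly 853 fps by this model.
print(round(approx_windowed_fps(300, 3072, 1080)))
```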


Other Sensors
Canon doesn't sell their sensors; Sony only sells to partners or for large-volume orders, and they only sell a small selection of their sensors, not their latest cinema ones. There are only a few companies that sell large-diameter sensors besides the mentioned Kodak/Truesense, CMOSIS and Aptina: OnSemi (VITA12/16/25 series) and Dynamax Imaging, which all offer similar specs for similar prices.
The biggest factor for us, though, is that the sensor datasheet can be shared without an NDA, and only CMOSIS and Truesense agreed to that. It is essential that developers and the community have access to this documentation, and we will not incorporate any image sensor that has no open documentation. After all, measurements like the ones a1ex did would have been pretty much impossible otherwise.

a1ex

Quote from: Sebastian on September 27, 2014, 01:12:47 PM
About max FPS
The CMV12000 V2 can indeed go up to 300 FPS at full 4096x3072 resolution in 10 bit mode, in 12 bit mode this max FPS is reduced to 180 FPS according to official specs. By reducing the number of read rows (smaller window -> e.g. reading a 16x9 window from the 4:3 sensor) the max FPS will increase beyond the 300/180 FPS.

That's really impressive. You mean it can get 840 fps at 1080 lines, and 1900 fps at 480 lines?!

That would be downright crazy; I bet g3gg0 can't wait to play with this sensor :P

(me too)

janoschsimon

UH those fps possible with axiom beta? HMMMMMM nice :D

PressureFM

Quote from: janoschsimon on September 27, 2014, 03:55:18 PM
UH those fps possible with axiom beta? HMMMMMM nice :D

They are talking about the sensor itself. The Axiom Beta is limited by its hardware and HDMI output.