AI algorithms for debinning

Started by mlrocks, July 17, 2021, 02:58:18 AM

IDA_ML

Very interesting and useful discussion, by the way.  I hope other experienced people will jump in too.

Levas

Interesting discussion indeed, Ida.
Always interesting to see how others look at image quality and do their post-processing.

Quote from: mlrocks on July 18, 2021, 08:23:31 PM
I start to believe that the anamorphic 5.7K raw on the 5D3 is a true 5.7K raw with a compression ratio of 6 or above, as the horizontal binning has a compression ratio of 3 or above and the 14-bit lossless LJ92 has a compression ratio of 1.5-2.5, depending on scene complexity and ISO level. The image quality may be "mediocre" compared to native uncompressed 6K raw, but it should be much better than 1920x2340 if the AR is 2.40.

Currently, the RED Komodo has 6K raw with compression ratio choices from 3:1 to 12:1, the same as the BMPCC 6K Pro. If implemented the same way as on the Canon R5, the Canon 1DX Mark III and C500 Mark II have 6K raw with a compression ratio of about 6:1.
...

...
My bet is that all of these cameras will be similar in terms of image quality if you're not pixel peeping, as long as the operator knows the camera and the raw workflow well enough to get the full potential out of the system.

In terms of resolution, the 5D3 and other ML cameras will never win in anamorphic modes, on resolution/detail that is.
The other cameras don't use pixel binning or line skipping as a technique to lower data rates.
The biggest difference between these professional video cameras and the 5D3 and other Canon DSLRs is the readout speed of the sensor.
The ML cameras can't read out fast enough to get enough pixels for 4K.
For example, the 6D sensor can be read out at a speed of about 90 megapixels per second.
UHD 3840x2160 x 24 fps = about 200 megapixels per second... so not possible on the 6D (that's the reason why the highest crop mode on the 6D at 25 fps is 2880x1200, which is about 86 megapixels per second).

The 5D3 sensor has the fastest readout speed of all ML cameras, which is why it has higher resolutions available in crop and anamorphic modes, but it's still not fast enough to read out 3840x2160 at 24 fps.

But how these 6:1 compression ratios work is a bit of a black box.
My bet is that most detail is cut off in dark(er) areas of the image.

I'm still not sure how raw the BRAW of the Blackmagic cameras is, or how raw the CRM files of the Canon R5 are.
Blackmagic used to have CinemaDNG raw, which is as raw as raw gets.
But the more you read about BRAW, the more it sounds like a video codec; some say the image is already debayered in BRAW  ???
 
What is also a mystery to me is Canon's Cinema RAW Light format (which is the only raw format in the Canon R5).
Cinema RAW Light has a high compression rate, and there are no software tools available to extract a raw image sequence out of it.
Which is weird; if I buy a camera that shoots raw, I'd like to be able to somehow extract an image sequence from the files in a raw format like CR2, CR3 or DNG.
It doesn't matter if the CR3s or DNGs are lossy compressed, but I'd like to open them in different photo and video editors to compare the output.

That said, ML raw has some advantages over the 6:1 compression techniques.
ML raw gives you true 14-bit lossless raw compression (or lower, like 10-bit if you want, but still lossless compression).
So color detail and shadow areas are probably better compared to the 6:1 compression techniques.

IDA_ML

Thank you, Levas.  Could you please explain to me what 6:1 compression techniques are?  Is this some kind of lossless RAW compression or is it something else?

mlrocks

Quote from: Levas on July 19, 2021, 12:29:16 PM
Interesting discussion indeed, Ida.
...
But how these 6:1 compression ratios work is a bit of a black box. My bet is that most detail is cut off in dark(er) areas of the image.
...
So color detail and shadow areas are probably better compared to the 6:1 compression techniques.


Very nice discussion.
It seems to me that the 5D3's raw cannot handle darker areas well. I once accidentally underexposed my footage, and after raising the exposure in MLV App by about 3 stops, the noise made the footage unusable. Yet the 5D3's low-light performance is pretty good in photo mode. I used 10-bit (instead of 14-bit) lossless compression; maybe that caused the loss of shadow detail?
Also, I didn't understand why the ML histogram emphasizes ETTR, since the 5D3 is strong in low light but not so good in the highlights. It seems that ETTR is a must on all ML cameras to retain shadow detail.
By the same logic, probably all those cameras with a compression ratio of 6:1 or above will have similar issues with shadow detail?

Jonneh

Quote from: IDA_ML on July 19, 2021, 07:03:15 AM
If these are the imperfections that are bothering you, then, I would say, you are too demanding of your footage.

They aren't. Sorry if I didn't make that clear, although I did try to. But if you say "happy pixel peeping", I'm going to pixel peep! :D

Just as others prefer the overall impression of the anamorphic mode at standard viewing distances, I prefer the overall impression of the crop modes. I think the eye is good at picking up on things that don't look right, even if it can't see the artefacts themselves. I think that's what's going on in my case, but I need to provide some examples (away at the moment). When I watch masc's anamorphic stuff, I think it looks fantastic, so as is usually the case, problems have solutions, even if we haven't identified them yet.
5D3 / 100D

mlrocks

Quote from: Jonneh on July 19, 2021, 03:33:51 PM
They aren't. Sorry if I didn't make that clear, although I did try to. But if you say "happy pixel peeping", I'm going to pixel peep! :D

Just as others prefer the overall impression of the anamorphic mode at standard viewing distances, I prefer the overall impression of the crop modes. I think the eye is good at picking up on things that don't look right, even if it can't see the artefacts themselves. I think that's what's going on in my case, but I need to provide some examples (away at the moment). When I watch masc's anamorphic stuff, I think it looks fantastic, so as is usually the case, problems have solutions, even if we haven't identified them yet.

I agree with you. At the same resolution level, the 1:1 crop image is clean and looks detailed when enlarged to 100%. The anamorphic modes need a special debinning process to look as nice at 100%. This could be a future enhancement area and an interesting topic for MLers.

Jonneh

Quote from: Levas on July 19, 2021, 12:29:16 PM
The 5D3 sensor has the fastest readout speed of all ML cameras, which is why it has higher resolutions available in crop and anamorphic modes, but it's still not fast enough to read out 3840x2160 at 24 fps.

"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

At this point, attempting to push this limit, if that's even possible, isn't so much about increasing resolution for its own sake; I'm sure most of us agree that resolutions above 2 or 3K give starkly diminishing returns in terms of the impression of quality, and 4K is plenty even for the big screen (since people adjust their angle of view, be it on a phone or in a 25m cinema). Steve Yedlin does a fantastic analysis of this claim:


https://www.youtube.com/watch?v=PaeasJiqLlM


Rather, it's about decreasing the crop factor while maintaining the image quality advantages of 1x1 modes over 3x3 or 1x3 modes (which, for me, are there in the case of 1x3; perhaps less so for other people). Not only does it look marginally better at full screen, it is also more tolerant of cropping, which gets lots of use, especially in more experimental cinema. If there are ways around the shortcomings of 1x3 modes, I'll be thrilled to find them, but I personally am not there yet.
5D3 / 100D

mlrocks

At the current stage, anamorphic imaging differs little from native imaging at the same resolution.

In practical video shooting, there is constant panning, zooming and tilting if the camera is mounted on a tripod, or constant complex movement if the camera is mounted on a Steadicam, a gimbal, a set of tracks, a drone or a helicopter. This generates all kinds of motion blur, easily overwhelming the small difference in image quality mentioned above.
After the video is finished, on its way from the production studio to the end audience, there will be further loss at each step. The best route is the commercial movie theater, currently typically with 2K or 4K 12-bit projectors; the loss is smallest on this route. As we discussed before, considering the viewing distance of the audience, the IQ difference will not be visible to them.
If the video is for broadcasting, either through cable or over the air, the bandwidth is limited and the loss of IQ will be much greater than when playing back in theaters. The audience will not see the IQ difference we found here.
If the video is for YouTube etc., or for Netflix/Amazon Prime streaming, the heavy codec and the limited bandwidth will eliminate the IQ difference we are talking about here.
Personally, I've found that viewing on a computer monitor in full-screen mode is the most demanding case, because the viewing distance is much closer than in a theater or in front of a TV. Even then, the quality of UHD anamorphic footage is very similar to 3K native footage.
In summary, if we look at the whole picture of content creation and all the routes by which video reaches the audience, the small IQ difference at the camera level is just a short tree in a forest of confounding factors.

On the other hand, the debinning process is far from perfect in ML anamorphic modes. There is still room to improve.

mlrocks

Quote from: Jonneh on July 19, 2021, 04:20:37 PM
"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

At this point, attempting to push this limit, if that's even possible, doesn't so much lie in increasing resolution for its own sake; I'm sure most of us agree that resolutions above 2 or 3K give starkly diminishing returns in terms of the impression of quality, and 4K is plenty even for the big screen (since people adjust their angle of view, be it on a phone or in a 25m cinema). Steve Yedlin does a fantastic analysis of this claim:


https://www.youtube.com/watch?v=PaeasJiqLlM).


Rather, it lies in decreasing the crop factor while maintaining the image quality advantages of 1x1 modes over 3x3 or 1x3 modes (there for me in the case of 1x3; perhaps less so for other people). Not only does it look marginally better at full screen, it is also more tolerant of cropping, which gets lots of use, especially in more experimental cinema. If there are ways around the shortcomings of 1x3 modes, I'll be thrilled to find them, but I personally am not there yet.


There is a reason why the Red One was so popular in 2008. 4K raw recording was actually the standard for scanning old Super 35mm into a digital intermediate (DI). There were a lot of intensive discussions on how much resolution a frame of Super 35mm negative has and how to retain the most detail in digital media. Now Hollywood is using 8K r/g/b raw to scan Super 35mm negatives; however, the gain may be drastically diminished, as no one brags about the 8K single-channel/24K tri-channel scanning as much as they did about 4K raw in 2008. It is generally considered that 4K raw is enough for Super 35mm resolution; higher resolution has little use due to the motion blur mentioned above. Just imagine this: can a photographer use a 30 MP high-resolution camera to take a perfectly clear and detailed image without any blur when the camera is moving and the subject is moving? If not, why bother with such a high-resolution camera?

Jonneh

Quote from: mlrocks on July 19, 2021, 07:28:26 PM
Can a photographer use a 30 MP high-resolution camera to take a perfectly clear and detailed image without any blur when the camera is moving and when the subject is moving? If not, why bother with such a high-resolution camera?

I do agree with the thrust of your argument (diminishing returns). A couple of counterpoints:

I haven't looked into the extent to which motion blur annuls gains in resolution, but it's at least plausible that a streaking point looks better than a streaking blob by as much as a point looks better than a blob. By analogy in the world of stills, an astrophotographer capturing a star streak still cares about resolution. If we're talking about a truly Parkinsonian cameraman or a Jason Bourne fight scene, it may be another matter.

Even if that isn't the case, we should probably be careful of overestimating the proportion of scenes affected by motion blur. In the experimental stuff I film and watch, it's pretty low. Elsewhere, scenes in which foreground and background are both blurred are firmly in the minority (at least according to this viewer). Just having one element that is static is enough to lend the impression of overall detail to a scene, whence the value of sufficient resolution. Even brief moments of stillness in an otherwise movement-filled scene can give this impression.

As for whether a "debinning" algorithm can produce gains qualitatively different from those of a scaling algorithm, I'll have to defer to someone more knowledgeable than I. Since the binning occurs at the analogue level (see the fantastic thread on pixel binning patterns in LiveView: https://www.magiclantern.fm/forum/index.php?topic=16516.0), you are presumably talking about some kind of rebayering, followed by a second debayering step. Whether or not this would (or could) be non-zero sum, I don't know.
5D3 / 100D
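
To make the baseline concrete, here is a toy sketch (my own illustration, not MLV App's code and not an actual debinning algorithm) of what current tools effectively do with 1x3 footage: demosaic the horizontally squeezed frame, then stretch it back to full width by interpolation. Any "rebayering" approach worth the effort would have to recover more detail than this. The 1920x2340 squeezed size is taken from the figures quoted earlier in the thread.

Code: [Select]
import numpy as np

def stretch_horizontal(rgb, factor=3):
    # Naive restoration of 1x3 (anamorphic) footage:
    # linearly interpolate each row back to 'factor' times its recorded width.
    h, w, c = rgb.shape
    x_src = np.arange(w)
    x_dst = np.linspace(0, w - 1, w * factor)
    out = np.empty((h, w * factor, c), dtype=np.float32)
    for ch in range(c):
        for row in range(h):
            out[row, :, ch] = np.interp(x_dst, x_src, rgb[row, :, ch])
    return out

squeezed = np.random.rand(2340, 1920, 3).astype(np.float32)  # stand-in for a demosaiced frame
restored = stretch_horizontal(squeezed)
print(restored.shape)  # (2340, 5760, 3)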

mlrocks

Quote from: Jonneh on July 19, 2021, 10:33:59 PM
I do agree with the thrust of your argument (diminishing returns). A couple of counterpoints:
...
Since the binning occurs at the analogue level, you are presumably talking about some kind of rebayering, followed by a second debayering step. Whether or not this would (or could) be non-zero sum, I don't know.

The link to the binning/debinning topic is very nice. Thank you.
In terms of motion blur, it is actually getting really heavy in Hollywood features. It is a major way Hollywood differentiates itself from the rest of the world. Nowadays Netflix and Amazon original shows are replacing the old big-network TV shows. These shows don't rely on action as much as Hollywood features do, but there is a trend of soap-opera-style shows following the style of Hollywood features. On the other hand, TV shows do not require extremely high-resolution cameras because the distribution channels are typically cable and the internet, which are limited in bandwidth. The main reason behind this trend is the easy availability of large-sensor cameras and large-aperture lenses on a low budget. Hollywood and the big networks use all kinds of camera movement, green screen and CG, tons of lighting, good acting, and special sound effects to differentiate themselves from indie movie makers and YouTube content providers.
If you look at online camera tests, most of them don't shoot real-world scenarios; the subjects and the cameras are typically still, so high resolution still shows a benefit. In field shooting, when a lot of camera and subject movement is involved, resolution is not that important. This is why the Arri Alexa series is so popular in Hollywood. The Alexa excels in dynamic range and color science, especially in skin tones, and even in fast-moving scenes those benefits are easy to see.

BatchGordon

About a better way to manage debinning... some months ago I had an idea for a custom debayering that could give us a bit more horizontal resolution, at the expense of some vertical resolution.
It could work, but... the problem is that it needs the green pixels to be binned in a "shifted" way between lines, as shown in the following post:
https://www.magiclantern.fm/forum/index.php?topic=16516.msg210484#msg210484
As you can see, the middle green pixel of the binning group of one line falls right between two groups of the following line (just ignore the line skipping in the picture, it's not our case for 5K anamorphic). Essentially, it's a three-pixel shift.

But... after some checking and tests I can confirm that, at least on the EOS M with Danne's latest ML build, the green pixels are binned with a single-pixel shift between one line and the next. So my idea cannot work on the currently recorded material.
If someone knows how to change the shift in the binning (I played with registers with no success)... there could be a chance to test it; otherwise I think there's no way to improve the current quality.
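
For anyone trying to follow the geometry, here is a toy sketch of the idea (my own illustration, not ML code; the real sample positions depend on the binning registers). It just prints where the 1x3-binned green samples of two consecutive green lines would sit for a 3-column shift versus the 1-column shift actually observed:

Code: [Select]
# Same-colour (green) pixels sit 2 columns apart, so a 1x3 bin spans 6 columns
# and produces one sample every 6 columns on a given line.
GROUP_SPAN = 6

def green_bin_centres(row_shift, n_groups=5):
    # Horizontal centres of binned green samples on one line, offset by row_shift columns.
    return [row_shift + GROUP_SPAN * k for k in range(n_groups)]

for shift in (3, 1):
    row_a = green_bin_centres(0)      # binned greens on one line
    row_b = green_bin_centres(shift)  # binned greens on the next green line
    print(f"shift {shift}: line A {row_a}, line B {row_b}, merged {sorted(row_a + row_b)}")

# With a 3-column shift the merged samples land every 3 columns, which is what a
# custom debayer could exploit to trade vertical for horizontal resolution.
# With the observed 1-column shift the two lines sample almost the same positions,
# so there is little extra horizontal information to recover.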

Levas

Quote from: Jonneh on July 19, 2021, 04:20:37 PM
"Read out" is a bit of a catch-all term though. Is there consensus on where exactly the bottleneck lies? Since fast CF and SD cards (w/ overclocking) see over 90 and 70 MB/s, respectively, and combined speeds don't surpass 135MB/s, it doesn't seem to be in the (final) write step. Is it known to lie at the sensor (analogue) level?

The readout speed I'm talking about is literally the time it takes to read out the sensor.
You might think that sensor readout is very fast, since you can take pictures at 1/4000th of a second, right?
But when you're taking a picture, the sensor data is actually read out over about 0.2 seconds (the time it takes to read out the whole sensor at full resolution).
The shutter is the only reason you get your 1/4000th or even 1/60th of a second exposure.
The shutter closes and no more light comes onto the sensor, but reading out the sensor still takes the full 0.2 seconds.

A reasonable way to estimate the max readout speed of most Canon DSLRs is the max burst speed per second multiplied by the sensor resolution.
In the case of the 6D, the advertised max burst speed is 4.5 frames per second: 4.5 x 20 megapixels = 90 megapixels per second.
For the 5D3, the advertised burst rate is 6 frames per second: 6 x 22 megapixels = 132 megapixels per second.
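
For anyone who wants to plug in other cameras, that back-of-the-envelope estimate is just a couple of multiplications (the burst rates and sensor sizes below are the advertised figures above, so treat the output as rough):

Code: [Select]
# Estimated sensor readout throughput = advertised burst rate x sensor resolution,
# compared against what UHD at 24 fps would need.
CAMERAS = {
    "6D":  {"burst_fps": 4.5, "sensor_mp": 20.0},
    "5D3": {"burst_fps": 6.0, "sensor_mp": 22.0},
}

uhd_mp_per_s = 3840 * 2160 * 24 / 1e6  # ~199 MP/s needed for UHD/24p

for name, cam in CAMERAS.items():
    est = cam["burst_fps"] * cam["sensor_mp"]
    verdict = "below" if est < uhd_mp_per_s else "above"
    print(f"{name}: ~{est:.0f} MP/s estimated readout, {verdict} the ~{uhd_mp_per_s:.0f} MP/s needed for UHD/24p")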

So for true UHD/4K resolution, sensor readout speed will be a bottleneck.

Not much of a problem at the moment though, because write speed is currently the bigger bottleneck.
The 6D can do 2880x1200 at 25 fps in crop mode, but not continuously in 14-bit lossless raw  :-\

Levas

Quote from: IDA_ML on July 19, 2021, 01:22:25 PM
Thank you, Levas.  Could you please explain to me what 6:1 compression techniques are?  Is this some kind of lossless RAW compression or is it something else?

It's about how much the data is compressed.
A video frame of 1920x1080x14 bit = 29,030,400 bits; divided by 8 -> 3,628,800 bytes; divided by a million -> about 3.63 megabytes.
So an uncompressed 14-bit 1920x1080 frame is 3.6 MB in size.
With a 6:1 compression ratio, your file size becomes 6 times smaller (so instead of 3.6 MB it would become 0.6 MB).
In ML, 14-bit raw is uncompressed, so the compression ratio is 1:1.
Then a few years ago, A1ex found out about the lossless LJ92 compression available in camera (the standard LJ92 compression, also used for lossless compression by the Adobe DNG Converter).
This is a lossless compression which makes the file size about 33% smaller, so ML lossless raw compression actually has about a 1.5:1 compression ratio.
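
The same arithmetic in a few lines of Python, for anyone who wants to try other resolutions or ratios (the 1.5:1 and 6:1 figures are just the nominal ones discussed here):

Code: [Select]
# Uncompressed raw frame size and data rate, then the effect of a nominal compression ratio.
def frame_mb(width, height, bit_depth):
    return width * height * bit_depth / 8 / 1e6  # megabytes per frame

def data_rate_mb_s(width, height, bit_depth, fps, ratio=1.0):
    return frame_mb(width, height, bit_depth) * fps / ratio

w, h, bits, fps = 1920, 1080, 14, 24
print(f"uncompressed frame: {frame_mb(w, h, bits):.2f} MB")  # ~3.63 MB
for ratio in (1.0, 1.5, 6.0):
    print(f"{ratio}:1 -> {data_rate_mb_s(w, h, bits, fps, ratio):.1f} MB/s at {fps} fps")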

Other camera manufacturers have options for 6:1 compression ratios; the 8K raw in the R5 is done with Canon's Cinema RAW Light format, which has an advertised compression ratio of 5:1.
Most of these are advertised as lossy compression formats, not lossless... but how the compression is done is in most cases kept a mystery by the manufacturers.
But there should definitely be some color/detail loss along the way.

Kharak

I think the LJ92 compression is closer to a 40-50% reduction, depending on the brightness of the scene.

The R5 raw is not really raw. I've shot and graded a lot of R5 footage, and the lossy compression is evident in the shadow detail. The noise floor is really bad; barely any information can be recovered from the shadows.

The R5 8K "raw" has less latitude compared to ML RAW, but that is not surprising with the amount of compression.
once you go raw you never go back

Levas

Ah yes, it's called LJ92 compression and not J92 compression, and the 5dR is of course the R5 (fixed it in my post  :P )

What you say about the R5 raw footage confirms my expectations.
The Canon Cinema RAW Light file format could be considered more like a new codec-type recording option than a raw-format recording option.

Probably all those high-ratio raw formats like 6:1 and 5:1 could be considered new codec types to choose from (instead of H.264 or H.265).

Since there is no way to get a raw image sequence from the R5 CRM files, I wouldn't be surprised if it isn't even intraframe compression (All-I) but some IPB compression applied to raw data.
Could be the case, since it's one big file, nobody knows what data is really in there  :P


Levas

Quote from: Kharak on July 20, 2021, 12:36:51 PM
The R5 raw is not raw, I graded and shot a lot of R5 footage and the lossy compression is evident in the shadow detail. The noise floor is really bad, barely any information can be harvested from the shadows.

Curious which raw format you're talking about; looking at the R5 specs, it seems to have three options for raw,
where the 2600 Mbps option is called RAW and not RAW Light  ???

8k Raw (29.97p/25.00p/24.00p/23.98p): Approx. 2600 Mbps
8k Raw (Light) (29.97p/25.00p): Approx. 1700 Mbps
8k Raw (Light) (24.00p/23.98p): Approx. 1350 Mbps

Recording is in 12-bit, so if I calculate correctly, uncompressed should be about 10000 Mbps (12-bit at 24 fps).
So the highest option has a compression ratio of about 4:1.
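
A quick sanity check of those numbers in Python (I'm assuming an 8192x4320 raw frame for the R5's 8K modes; that exact frame size isn't stated above, so adjust if it differs):

Code: [Select]
# Rough compression ratios of the R5 8K raw modes, from the advertised bitrates.
def uncompressed_mbps(width, height, bit_depth, fps):
    return width * height * bit_depth * fps / 1e6

modes = [("8K RAW 24p", 2600, 24), ("8K RAW Light 25p", 1700, 25), ("8K RAW Light 24p", 1350, 24)]
for name, advertised_mbps, fps in modes:
    ratio = uncompressed_mbps(8192, 4320, 12, fps) / advertised_mbps
    print(f"{name}: ~{ratio:.1f}:1 compression")  # roughly 4:1, 6:1 and 7.5:1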

IDA_ML

Thanks a lot, Levas, for this explanation.  Am I wrong if I say that ML RAW video has the best quality in terms of data loss (zero) due to compression, among all codecs available to date?

Jonneh

Quote from: Levas on July 20, 2021, 11:23:31 AM
The read out speed I'm talking about is literally the time it takes to read out the sensor.

Very interesting, thanks for the explanation.

Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?

Quote from: Levas on July 20, 2021, 11:23:31 AM

Not much of a problem though, because at the moment, writing speed is the biggest bottleneck.

If this is the case, why is it that the maximum observed speed when card spanning, around 135 MB/s, is less than the sum of the max speeds to the CF and SD cards (approx. 90 + 70 = 160 MB/s) when not spanning?
5D3 / 100D

mlrocks

Quote from: IDA_ML on July 20, 2021, 01:49:24 PM
Thanks a lot, Levas, for this explanation.  Am I wrong if I say that ML RAW video has the best quality in terms of data loss (zero) due to compression, among all codecs available to date?

ARRIRAW is uncompressed and unencrypted. Other than that, Sony, RED and Blackmagic all have 3:1 as their best compression ratio option. I'd say you are right that ML raw is the best in the industry.

mlrocks

Quote from: Kharak on July 20, 2021, 12:36:51 PM
The R5 raw is not really raw; the lossy compression is evident in the shadow detail.
...
The R5 8K "raw" has less latitude compared to ML RAW, but that is not surprising with the amount of compression.

It would be interesting if you could test the R5 8K raw at different compression ratios against the 5D3 anamorphic 6K raw, to see whether the 5D3 anamorphic 6K raw can measure up to the R5 8K raw. Of course, the best comparison would be BMPCC or RED Komodo 6K raw against the 5D3 anamorphic 6K raw, to see how big the difference is.

theBilalFakhouri

Quote from: Jonneh on July 20, 2021, 03:15:31 PM
Just to check that I'm following your maths, the 3.5K crop mode is 3584 x 1730 = 6.2 megapixels per frame. Recording at 24 fps we get 148.8 megapixels per second, which would seem to surpass the 132 MP/s you mention. What's going on here?

There is no 3584x1730 crop mode on the 5D3. The native crop is 3584x1320 1:1 @ 30 FPS; using the crop_rec module there are 3072x1920 1:1 @ 24 FPS and 3840x1536 1:1 @ 24 FPS. All of these = ~142 MP/s.
In Danne's custom build for the 5D3 there is 3264x1836 1:1 @ 24 FPS = ~144 MP/s.

Fun fact: there is a Full-Res LiveView mode which is 5784x3870 @ 7.4 FPS = ~166 MP/s. There are other limits we still need to figure out; for example, we can already do 3072x1920 @ 24 FPS in 1:1 mode, but we can't achieve 1920x3072 @ 24 FPS in 1x3 binning (anamorphic) mode on the 5D3, even though it is the same readout speed.

These limits are probably related to FPS timers (at certain values you get a corrupted image or the camera freezes even if we didn't hit the sensor speed limit; probably we need to tweak other registers related to FPS timers on the 5D3 which we don't know about yet).

At high FPS, sensor speed becomes even more limited, e.g. 1920x1040 3x3 @ 48 FPS = ~96 MP/s. Why? (Again, maybe related to FPS timers or other related registers, or we are missing some info about sensor speed and how it should be calculated.)
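
If anyone wants to reproduce those per-mode numbers, the throughput is just width x height x fps (the mode list below is copied from this thread, nothing else is assumed):

Code: [Select]
# Readout throughput in megapixels/second for the modes mentioned above.
modes = [
    ("3584x1320 1:1 @ 30 (native crop)",  3584, 1320, 30),
    ("3072x1920 1:1 @ 24 (crop_rec)",     3072, 1920, 24),
    ("3840x1536 1:1 @ 24 (crop_rec)",     3840, 1536, 24),
    ("3264x1836 1:1 @ 24 (Danne build)",  3264, 1836, 24),
    ("5784x3870 Full-Res LiveView @ 7.4", 5784, 3870, 7.4),
    ("1920x1040 3x3 @ 48",                1920, 1040, 48),
]
for name, w, h, fps in modes:
    print(f"{name}: ~{w * h * fps / 1e6:.0f} MP/s")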

IDA_ML

Quote from: mlrocks on July 20, 2021, 04:02:40 PM
Arri Raw is uncompressed unencrypted. Other than that, Sony and Red and Black Magic all have a compression ratio of 3 as the best option. I'd say that you are right that ML raw is the best in the industry. 

Well, if that is the case, why does nobody among the big manufacturers use ML RAW?  How come no one has ever come to ML and said: "What you guys are doing here is fantastic.  Why don't you let us use your ML RAW method in one of our new camera models, and we will help A1ex and the other developers financially to continue their efforts in further developing ML?"

Just imagine a compact camera that can film true 4K 14-bit ML RAW 1:1 video with lossless compression (583 MB/s are required for 4K@60 fps according to Levas's formula), with an mSATA SSD instead of a CF card to get 600 MB/s write speed for continuous recording and no overheating issues! For a company like Canon, developing a compact model with such an interface would be a piece of cake!  Why don't they do it?
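
For what it's worth, the 583 MB/s figure seems to already assume the ~1.5:1 lossless compression mentioned earlier; uncompressed it would be higher. A quick check (my reading of the figure, not stated explicitly above):

Code: [Select]
# 4K UHD, 14-bit, 60 fps: uncompressed data rate and the estimate with ~1.5:1 lossless compression.
w, h, bits, fps = 3840, 2160, 14, 60
uncompressed = w * h * bits / 8 * fps / 1e6   # ~871 MB/s
with_lossless = uncompressed / 1.5            # ~580 MB/s, close to the 583 MB/s quoted
print(f"uncompressed: {uncompressed:.0f} MB/s, with ~1.5:1 lossless: {with_lossless:.0f} MB/s")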

names_are_hard

Quote
How come no one has ever come to ML and said: "What you guys are doing here is fantastic. Why don't you let us use your ML RAW method in one of our new camera models, and we will help A1ex and the other developers financially to continue their efforts in further developing ML?"

Hacking an existing cam to do a new raw mode is impressive, but if you're making new hardware it's simple.  Dumping raw frames to disk is about the simplest way you can do things.  And it's (probably) cheaper and more predictable to use your own full-time staff, whom you're likely already paying anyway.

They don't make this cam because it's expensive to hit the data rates needed for full-frame raw, and it's assumed there's no market for it at the price that would be required.  I'd guess they're right about that assumption; most people, even in the world of filmmaking, simply don't need it.  You can get (very expensive) industrial/scientific cams that do work this way.

mlrocks

Quote from: IDA_ML on July 20, 2021, 05:04:39 PM
Well, if that is the case, why does nobody among the big manufacturers use ML RAW?
...
Just imagine a compact camera that can film true 4K 14-bit ML RAW 1:1 video with lossless compression... Why don't they do it?


The 5D3 and ML disrupted the whole mid-to-low-end cinema camera market. Only very few high-end cinema cameras can compete with 5D3 ML raw. I remember that many years ago someone did a test with the Alexa Classic, the Red One, and 5D3 ML raw 1080p 3x3. The conclusion was that the 5D3 with ML matched the Alexa Classic very closely; only the highlight roll-off was not as good as the Alexa Classic's. But which camera on earth could beat the Alexa's highlight roll-off back then?

Even now with 5D3 ML, if a test confirms that 5D3 ML 6K anamorphic raw in general matches the RED Komodo and the BMPCC 6K Pro, it will go viral across the internet. My now "outdated, ancient, nobody-wants-it" less-than-200-bucks 650D can do 4.5K raw, yet a brand new, much-acclaimed Canon C70 with a price tag of $6000 can only do 4K without raw?! Imagine a hundred-buck, second-hand, unknown EOS M running ML doing 5K raw, doing what a fanboy's RED Komodo does, yet at pocket size; then who is the real fanboy, and what does this mean for the camera manufacturers, for video makers, and for Hollywood?!

All these above-4K and larger-than-135-full-frame games are about differentiation, to justify extremely high profits in the camera-making industry. They have nothing to do with the general audience's viewing experience. If Canon had not changed the firmware code and A1ex could do the same on the 5D4 and R5, then Sony, Canon, Arri and RED would all be out of business in a few years, or would shrink significantly.