[WONTFIX] Variable bitrate according to buffer level

Started by Leon, July 14, 2012, 01:08:48 AM

Leon

Since ML seems to be able to monitor the buffer level, would it be possible to encode at the maximum bitrate possible without video stopping?  If the buffer starts to fill, ML would reduce the bitrate as required.  It may be risky (in terms of unpredictable stopping), but there are times when that would be ok and it could just be restarted.  Also, for such times a lower target buffer usage could be set, e.g. only 50-60%.

And/or perhaps a system where you can set a VBR setting, e.g. -12 (high quality), or even -16, but ML would automatically reduce it if the buffer starts to fill, and could then gradually increase it later if the buffer remains unfilled.  This would allow higher quality while also allowing the bitrate to drop lower in simple areas/scenes.
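
To make the idea concrete, something like this rough pseudo-C is what I have in mind (a sketch only; buffer_usage_percent() and set_qscale() are made-up names, not real ML functions):

/* Sketch of buffer-driven VBR: pick the quantiser from the current buffer
 * usage, so quality drops as the buffer fills and recovers when it drains.
 * Both helpers below are hypothetical placeholders, not real Magic Lantern code. */
extern int  buffer_usage_percent(void);   /* 0..100, made up */
extern void set_qscale(int q);            /* lower value = higher quality, made up */

void vbr_from_buffer(int best_qscale, int worst_qscale)
{
    int usage = buffer_usage_percent();

    /* Linear mapping: empty buffer -> best quality, full buffer -> worst. */
    int q = best_qscale + (worst_qscale - best_qscale) * usage / 100;

    set_qscale(q);
}

So with best_qscale = -16 and worst_qscale = +4, recording would sit at -16 while the buffer stays empty and back off automatically as the buffer fills.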

a1ex

First you have to provide a clear proof that it's worth the effort. Otherwise, wontfix.

benoitvm

I support this request and formulate it somewhat differently:

"Allow VBR with a (user-selectable) max peak bitrate (the latter to avoid recording stop due to either SD speed limit or buffer overflow)"

Much like some retail MPEG-2 encoding programs, where you can specify an overall VBR quality level, AND a minimum AND a maximum bitrate.

Rationale: you get the benefits of constant quality VBR but you are not impacted by occasional bitrate peaks.
70D (W) since Dec 2014

a1ex

Quote: Allow VBR with a (user-selectable) max peak bitrate

This is exactly how current CBR works.

Leon

My understanding is that the current CBR works between a maximum and minimum quantiser value, rather than bitrate.  This means that if I set a high CBR value or a high-quality Q-value (i.e. a low quantiser), it would be fine for low-complexity scenes, but when a complex scene occurs, the buffer will overrun and video will stop.  The only solution to this is to limit the video setting.  However, this means that most of the time the video is not being encoded at the best quality.  Furthermore, I need to change the video settings depending on whether I put in a 90 MB/s card or a 6 MB/s Class 6 card.  Not only that, I find that if I have just formatted the card I can use a higher setting than if the card has been used for a while.

So it would be really great to have an option where the buffer is monitored and the bitrate is automatically lowered if it starts filling.  It could work simply by increasing and decreasing the CBR setting (or Q-value) automatically depending upon buffer level.  I'm sure a lot of people would find this very useful.

Many thanks!

1%

I can see this being somewhat useful. On dark scenes I can go almost all the way up to 3.0x. At normal bitrates clips end up being something like 19 Mbps. At 3.0x they would naturally be higher.

It does seem like a PITA to implement and there is still other stuff to do.

a1ex

CBR has a min/max quantizer (+/-16) and a max bitrate limit (what you set in the ML menu).

Leon

Ok, but there's still no way of preventing the buffer from over-running in complex scenes without sacrificing the quality of normal, low-complexity, or dark scenes.

When I read forums discussing Canon SLRs for movie making many people still complain that the encoding quality is not high enough, and certainly artefacts can become apparent whilst grading or editing.  I'm sure a lot of people would appreciate being able to push the quality higher in normal and low complexity / dark scenes.

Furthermore, people currently have to test each type of card they own and guesstimate which CBR setting is the highest safe one.

I'm experimenting with filming analogue static on an old TV with a high ISO setting to simulate a super-complex scene.  I intend to compare different ISO settings, and expect to find that I have to use a lower CBR setting at higher ISOs and shorter shutter speeds because of the extra noise and sharpness respectively.  Additionally, Picture Style, contrast, and sharpness will surely affect it too.  And even after all this I will end up using a lower CBR setting than is optimal for most situations.

A feature to automatically prevent buffer overruns in all situations would just be so handy.

Thanks!

1%

I have 2 modes: VBR or CBR.  One sets the qscale (stay as close to qscale X as possible), the other uses the "x" factors.

I'm a bit lost there, but CBR is supposed to adjust qscale (or its own quantizer) on the fly already? Buffer warning only shuts graphics off... maybe it should also lower the factor? Or does it already lower qscale but never in time?

The big question... how would you know when to raise it? A sudden jerk can stop video.

a1ex

So far, nobody was able to show that higher bitrate results in better quality (at least no ML moderator could confirm it).

CBR reduces the quality whenever the instant bitrate is above limit, and increases it when it's below. That's Canon's algorithm and can't be modified. Old ML had a CBR implemented from scratch, but it couldn't track the bitrate changes fast enough.
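
In rough pseudo-C, that behaviour is something like this (my paraphrase only, with made-up names; not Canon's actual code):

/* Paraphrase of Canon's CBR rate control as described above (hypothetical
 * names). The quantiser is nudged per frame based on the instantaneous
 * bitrate, not on how full the recording buffer is. */
extern int  instant_bitrate_kbps(void);   /* measured over the last frame(s), made up */
extern void nudge_quantiser(int delta);   /* positive = lower quality, made up */

void canon_cbr_step(int bitrate_limit_kbps)
{
    if (instant_bitrate_kbps() > bitrate_limit_kbps)
        nudge_quantiser(+1);   /* above the limit: reduce quality */
    else
        nudge_quantiser(-1);   /* below the limit: increase quality */

    /* The quantiser itself stays clamped to roughly -16..+16. */
}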

1%

Can the buffer warning level adjustment function change the limit while recording without a crash?


How would you prove bit rate changes quality? Chroma key? There was a guy on google groups wanting to change the encoder profile but I'm not sure if that is even an active part of encoding or just something it writes to the files.

a1ex

Buffer warning just keeps the CPU usage low in ML tasks.

To prove that high bit rate results in better quality, one has to post something shot at high bitrate compared to the same thing shot at normal bitrate, and there should be a visible difference between them.

Leon

I have been encoding with x264 for 6 years and can clearly see encoding artefacts in many of my video files from my 550D.  In all other circumstances I can think of, increasing the bitrate reduces artefacts.  Is it not safe/appropriate to assume that increasing the bitrate in a Canon DSLR would reduce these encoding artefacts?  If not, why not?  Encoding artefacts are due to insufficient bitrate, so increasing the bitrate should (always) reduce them.

Differences do not need to be visible straight out of the camera.  Many DSLR movie makers find that the artefacts become much more problematic once they start grading and adjusting levels!  Lightening dark scenes/areas is particularly problematic.

Additionally, for example, I can't even record reliably in all conditions at the Canon firmware default with my Verbatim cards that are sold as "Class 6", so it would be useful even just to be able to use CBR 1.0 with the added feature of automatically reducing the bitrate slightly when the buffer is filling too quickly.  Again there is the problem that CBR 1.0 is probably fine for a mostly out-of-focus, low-motion scene, but I would have to manually reduce it (definitely causing artefacts!) if I anticipate complexity.  (Unfortunately, two years ago I bought three of these 16GB Class 6 memory cards and basically couldn't use them for video before Magic Lantern came along!)

Ta.

a1ex

OK, show an example with grading where the difference is visible.

Leon

Is this not effectively, in the end, just the same as saying, "Show me that there's any benefit in going above CBR 1.0, or whatever your card can currently reliably manage?"  (Such as CBR ~0.8 for my Verbatim cards!)

I'm also not sure exactly what you're asking me to do.  I mean it could be one of many things...

1) Are you just asking me to show a framegrab where, after realistic editing, encoding artefacts are visible?  This will not prove that increasing the bitrate in the first place would have improved it.  (Though a higher bitrate would have improved it with x264.)

2) Or are you asking to record a scene at the highest CBR my card can reliably use for complex scenes, then replicate the scene and motion as close as possible and re-record it at a higher CBR (that hopefully is momentarily OK) and demonstrate a difference after grading and levels adjustment?

3) Since the proposed feature doesn't exist (yet  ;) ), and even if it did I couldn't record from the sensor simultaneously at both the user-determined highest reliable CBR for that card and with another encoding algorithm that pushes the buffer harder...  argh...  I'm just not sure how best to demonstrate this.

I'm not so concerned with possible benefit specifically for colour grading alone anyway, since I'm personally not going for big changes in colour, though of course others sometimes change the colours a lot.  For me, it's more about pushing the card to its limit with complex scenes (without the buffer overfilling), and being concerned about artefacts visible after post-production that affects the tone curve.

I'm not trying to be awkward, and I'm sorry this thread is taking up your time; it's just that whenever encoding artefacts are visible, increasing the bitrate should help, and there is currently the limitation that one CBR setting is fine for low-complex scenes, but will buffer overrun on a high complexity scene.

a1ex

Just convince me that higher bitrate results in noticeable better quality. But not with a ton of text.

Marfre

Quote from: a1ex on July 18, 2012, 08:43:34 AM
Just convince me that higher bitrate results in noticeable better quality. But not with a ton of text.

The video I always use to convince people about the effect of CBR:

[embedded YouTube video comparing CBR 0.3 and CBR 3.0]

Not my own but it's great for showing the differences changing the bitrate can have on footage

kgimedia

Is there proof that CBR 3.0 is better than CBR 1.0?  The video above shows CBR 0.3 vs. CBR 3.0. Even then, at 720p on YouTube, most viewers would not even notice.


EDIT: I have never noticed a difference. I recorded at 1.4 for months and stopped because there was no noticeable difference in quality and it ate up card space faster.

poromaa

Quote: Not my own but it's great for showing the differences changing the bitrate can have on footage

Hm.. I would almost say that the 0.3 looks better than the 3.0. Especially when maxing the saturation (3.0 gets blockier than the 0.3). There might be some sharpness in the black text, but I'm not really sure it's due to the codec (since I assume these shots must have been recorded separately?).

I think there could be a better test though; much of the picture is fairly static (which is a good thing for a codec). Maybe the difference would be greater if the whole picture was moving... I'm going to do some tests.

1%

Some guy got burned shooting an interview at .3x so it can't be better. I'm planning on implementing 2 more encoder modes that let you scale every option independently and then some better testing can happen.

Leon

It seems a load of the posts that were here a few days ago have been moved to a new developer thread titled "Bit rate investigation - altering ratio between P and I frame sizes" at:

http://www.magiclantern.fm/forum/index.php?topic=1565

It includes frame grabs showing differences between different quality settings.

benoitvm

Quote from: a1ex on July 14, 2012, 12:51:16 PM
This is exactly how current CBR works.

Yes, but the difference between this CBR and what I'm asking for is that with VBR (plus a max), when shooting "easy-to-encode" scenes (low detail, low contrast, mostly static), the bitrate at constant Q would drop dramatically and hence fill up the card much more slowly...
70D (W) since Dec 2014

benoitvm

Quote from: Leon on July 17, 2012, 07:22:43 PM
I have been encoding with x264 for 6 years and can clearly see encoding artefacts in many of my video files from my 550D.  In all other circumstances I can think of, increasing the bitrate reduces artefacts.

I have been wondering why encoding 1080p video on a computer with x264 at 10~15 Mbps produces far fewer artefacts than recordings made on my EOS 600D at ~45 Mbps (CBR 1.0), especially considering that the video produced by the 600D is not nearly as sharp as the video produced by good HD camcorders (which is what I encode with x264). The only answer I found is that x264 has much more CPU power available, and that the H.264 encoding engine in the DSLR is much less sophisticated due to the lower CPU power available. Hence the huge gap in efficiency between the encoders.

And therefore, with the DSLR chip being the limiting factor, the equation [higher bitrate = fewer artefacts] might well not scale as it does on a PC... i.e. above a certain threshold, adding more Mbps might not proportionally reduce artefacts... and perhaps CBR 1.0 might even be a reasonable sweet spot ???

Does this sound reasonable?
70D (W) since Dec 2014

Leon

The CPU power is a massive factor, because it allows more and more features of H.264 to be used whilst maintaining the required frame rate.  RAM has a lesser effect.  H.264 has many features the camera will not be using.  Even on a quad-core computer I re-encode 1080p slower than real-time, because encoding in real-time would mean the files would have to be something like 5-10% larger for the same quality.

The camera could use a couple of the H.264 features I use, but it will not be using most of them:

In-loop deblocking filter,
3 Reference frames,
3 B-frames,
CABAC entropy encoding,
8x8 transform,
Weighted p-frames,
Pyramidal b-frames,
Adaptive b-frames,
Uneven multi-hexagonal motion estimation,
Subpixel motion estimation level 7 (RD in all frames & Psy-RD),
Adaptive quantisation,
Psychovisual rate distortion,
"Most" partition types,
Psycho-visual Trellis quantisation.

Together they have a massive effect on the file size.  I don't think the camera uses any B-frames, which have a big effect on their own (something like 30% better compression with 3 B-frames than with none).
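
For reference, this is roughly how those options look when set through x264's C API (a sketch from memory; field names can differ between x264 versions, so treat it as illustrative rather than exact):

#include <stdint.h>
#include <x264.h>

/* Illustrative only: maps the features listed above onto x264's parameter
 * struct. Values are typical, not a recommendation. */
static void setup_x264_features(x264_param_t *p)
{
    x264_param_default(p);

    p->b_deblocking_filter     = 1;                     /* in-loop deblocking filter */
    p->i_frame_reference       = 3;                     /* 3 reference frames */
    p->i_bframe                = 3;                     /* 3 B-frames */
    p->i_bframe_adaptive       = X264_B_ADAPT_TRELLIS;  /* adaptive B-frames */
    p->i_bframe_pyramid        = X264_B_PYRAMID_NORMAL; /* pyramidal B-frames */
    p->b_cabac                 = 1;                     /* CABAC entropy encoding */
    p->analyse.b_transform_8x8 = 1;                     /* 8x8 transform */
    p->analyse.i_weighted_pred = X264_WEIGHTP_SMART;    /* weighted P-frames */
    p->analyse.i_me_method     = X264_ME_UMH;           /* uneven multi-hexagonal ME */
    p->analyse.i_subpel_refine = 7;                     /* subpixel ME level 7 */
    p->analyse.i_trellis       = 2;                     /* trellis quantisation */
    p->analyse.f_psy_rd        = 1.0f;                  /* psychovisual rate distortion */
    p->rc.i_aq_mode            = X264_AQ_VARIANCE;      /* adaptive quantisation */
}

None of this is available to tweak inside the camera, of course; it just shows how much of the format the in-camera encoder leaves on the table.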

Leon

It is possible that the camera/encoder forces some decisions that mean extra bitrate does not have as big an effect as we would hope, but I think it's very unlikely that CBR 1.0 is the limit of usefulness.  It may well be a "sweet spot" in that it gives a good balance between quality and file size.

A CBR value is not, in itself, a quality level.  For example, CBR 0.7x may look great on an easy scene (because it can use low, i.e. high-quality, Q-scale values) but it certainly doesn't look good on a difficult scene.

It would make much more sense to suggest that there may be a point on the Q-scale beyond which there is little discernible improvement.  However, the difference between Q-scale -8 (high quality) and -16 (very high quality) is noticeable on a frame-grab.  And trying to record at only Q-scale -8 was enough to repeatedly choke my just-formatted 45 MB/s SanDisk card (UDMA-1, 4.5x faster than Class 10) on a medium-complexity scene!

It would seem counter-intuitive to say that lower compression Q-scales don't matter when they are producing large bitrates, but do matter when they are producing medium bitrates.  Furthermore, even if that were the case, you could have a scene where one side is stationary/unchanging but chaos is ensuing on the other side.  The argument would then be that one half of the scene needed a different Q-scale to look good than the other half, and how would that be dealt with?


To me it just seems logical (and not difficult) to keep an eye on the buffer and increase the Q-scale (to decrease the data rate) if the buffer is filling up.  I have a 16GB Class 6 Verbatim card (ironically marketed as a "video" card) which I have to use at only CBR 0.8x.  Video stopping unexpectedly is one of the worst things about recording with a DSLR!
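
Roughly, the kind of asymmetric logic I'm imagining (again just a sketch with made-up names and thresholds): back off quickly the moment the buffer climbs, and only creep quality back up after it has stayed low for a while, so a sudden jerk of motion can't outrun it.

/* Sketch of asymmetric buffer feedback (hypothetical names and thresholds).
 * Quality is dropped aggressively as soon as the buffer climbs, but only
 * raised again after the buffer has stayed nearly empty for ~1 second. */
extern int  buffer_usage_percent(void);   /* 0..100, made up */
extern void change_qscale(int delta);     /* positive delta = lower quality, made up */

void buffer_feedback_step(void)           /* call once per recorded frame */
{
    static int calm_frames = 0;
    int usage = buffer_usage_percent();

    if (usage > 50)
    {
        change_qscale(+2);                /* buffer half full: back off hard, immediately */
        calm_frames = 0;
    }
    else if (usage < 20 && ++calm_frames > 30)
    {
        change_qscale(-1);                /* sustained headroom: raise quality one step */
        calm_frames = 0;
    }
}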