Is this not effectively, in the end, just the same as saying, "Show me that there's any benefit in going above CBR 1.0, or whatever your card can currently reliably manage?" (Such as CBR ~0.8 for my Verbatim cards!)
I'm also not sure exactly what you're asking me to do. I mean it could be one of many things...
1) Are you just asking me to show a framegrab where, after realistic editing, encoding artefacts are visible? This will not prove that a higher bitrate in the first place would have improved it. (Though with x264 a higher bitrate would have.)
2) Or are you asking me to record a scene at the highest CBR my card can reliably use for complex scenes, then replicate the scene and motion as closely as possible, re-record it at a higher CBR (that is hopefully momentarily OK), and demonstrate a difference after grading and levels adjustment?
3) Since the proposed feature doesn't exist (yet), and even then I couldn't record from the sensor simultaneously at the user-determined highest reliable CBR for that card and with another encoding algorithm that pushes the buffer harder... argh... Well, I'm just not sure how best to demonstrate this.
I'm not so concerned with possible benefit specifically for colour grading alone anyway, since I'm personally not going for big changes in colour, though of course others sometimes change the colours a lot. For me, it's more about pushing the card to its limit with complex scenes (without the buffer overfilling), and being concerned about artefacts visible after post-production that affects the tone curve.
I'm not trying to be awkward, and I'm sorry this thread is taking up your time. It's just that whenever encoding artefacts are visible, increasing the bitrate should help, and there is currently the limitation that one CBR setting is fine for low-complexity scenes but will cause a buffer overrun on a high-complexity scene.
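To illustrate what I mean by that limitation, here's a minimal leaky-bucket sketch (all numbers are made up for illustration, not measured from any card): frame sizes scale with scene complexity at a fixed CBR factor, while the card drains the buffer at a constant write speed. The same CBR setting that is safe on a simple scene overruns the buffer on a complex one.

```python
# Hypothetical leaky-bucket model. The function name, parameters, and
# all constants below are illustrative assumptions, not real camera values.

def buffer_overruns(complexities, cbr_factor, base_frame_bits,
                    drain_per_frame, buffer_cap):
    """Return True if the encoder buffer overruns during this scene."""
    fill = 0.0
    for c in complexities:
        fill += cbr_factor * base_frame_bits * c   # bits produced this frame
        fill = max(0.0, fill - drain_per_frame)    # card writes out a fixed amount
        if fill > buffer_cap:
            return True
    return False

simple_scene  = [1.0] * 300   # 300 low-complexity frames
complex_scene = [3.0] * 300   # 300 high-complexity frames

# The same CBR factor survives the simple scene...
print(buffer_overruns(simple_scene, 2.0, 1.0, 2.5, 50.0))   # False
# ...but overruns on the complex scene.
print(buffer_overruns(complex_scene, 2.0, 1.0, 2.5, 50.0))  # True
```

The point of the sketch is just that "highest reliable CBR" depends on the scene: any single setting is either conservative for simple scenes or risky for complex ones.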