I'm coming round to dyfid's way of thinking on this. My initial concern was the apparent limitations of the Rec709 colour space
Apparent limitations is a good description, but as you imply, it's unfounded for raw.
Rec709 gamut is not an issue; it's generally the gamut the majority will view the final video in: an sRGB monitor of questionable ability, a Rec709 LCD or LED TV, or even a Rec709 home cinema projector.
The Rec709 gamma curve would be a limiting factor if encoded into lossy compression such as the 8-bit H.264 profile Canon cameras use.
But in the case of raw there is, even in Resolve 11, far more flexibility: you can revisit the raw data while previewing through the Rec709 gamma transform, which is where the final video is heading anyway, whether it's transformed in a controlled way via an output LUT or just injected with contrast and saturation to suit.
It's totally different from dealing with Rec709 H.264, where, as we all know, you can't keep going back to the raw data to adjust highlights, shadows, lift, gain, white balance etc., because the camera has already done that for us. All of that happens before the transform to the Rec709 colour space and whatever gamma is chosen.
The Rec709 raw transform in Resolve is 16-bit, not 8-bit, so that's 65,536 levels to play with at 32-bit precision, and it's considered that even with a typical 2.2-2.4 gamma encoding on 16-bit data, 16 f-stops of DR can be comfortably distributed, albeit not linearly. Choose linear for gamma with Rec709 in Resolve and it's even more comfortable, though linear is not good to grade in.
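A quick back-of-the-envelope sketch of that point (my own illustration, nothing from Resolve's internals): count how many integer code values land in each f-stop when linear light is encoded with a plain 2.4 power-law gamma, at 8-bit versus 16-bit depth. The deep shadows get starved at 8-bit but stay comfortable at 16-bit.

```python
# Rough illustration (an assumption-laden toy, not Resolve's actual maths):
# code values per f-stop under a simple 2.4 power-law gamma encoding.

def codes_per_stop(bits, stops=16, gamma=2.4):
    levels = 2 ** bits
    counts = []
    for s in range(stops):
        # Linear range covered by this stop, counting down from full scale.
        hi = 2.0 ** (-s)
        lo = 2.0 ** (-(s + 1))
        # Encode the stop boundaries with the power-law curve, then quantise.
        code_hi = int((hi ** (1.0 / gamma)) * (levels - 1))
        code_lo = int((lo ** (1.0 / gamma)) * (levels - 1))
        counts.append(code_hi - code_lo)
    return counts

eight = codes_per_stop(8)
sixteen = codes_per_stop(16)
print("8-bit codes in stops 1..16: ", eight)   # bottom stops get ~1 code
print("16-bit codes in stops 1..16:", sixteen)  # bottom stops still get hundreds
```

The exact numbers depend on the curve chosen, but the shape of the result is the point: gamma encoding spends codes non-linearly, and only at higher bit depths does the bottom of a 16-stop range keep enough values to grade with.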
Absolutely no point, imho, in using log unless you're going to an intermediate for edit and grade outside of Resolve. That's what log is for: to efficiently maximise data in a minimal bit depth, 8 and 10-bit being the most common. But again, that's not raw data.
Where does log play a part in a typical raw development application like Lightroom, UFRaw, RawTherapee or Darktable? Nowhere to be seen, because it's pointless there.
In fact I'd suggest the future for raw development in Resolve, maybe in the next release, is to expand the L*a*b toolset. In 11 BM introduced L*a*b colour space for the first time; that's a pointer to where they're heading. You can only do so much with the typical RGB tools: 1D LUTs (lift, gamma, gain and curves).
and like most I'm reluctant to compromise any image quality, therefore the expanded-gamut BMD Film appeared to be a better choice, without necessarily knowing exactly what any of that in fact means.
However, considering the initial access to raw data at 32-bit floating point (so no clipping), perhaps that's unnecessary?
Expanding the gamut via BMD Film is one thing, but applying it to a camera with differing sensor capability is another. It gives a look, but whether that's detrimental is another question. To me, BMD Film is being promoted by the LUT creators to provide a flat, log-like appearance as a starting point for their heavy LUTs, in order to maximise the instant gratification and minimise the 'it all looks too contrasty and saturated' complaints. Rather than working with primary grading on the raw to get it where you want and then applying the look LUT last.
The raw process as I understand it: raw -> WB -> demosaic -> adjust exposure etc. -> scale into output bit depth -> intermediate colour space (XYZ) -> transform to output colour space (Rec709) -> apply gamma curve or linear -> output.
Every time we adjust WB, the raw is demosaiced again and the cycle continues.
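The steps above can be sketched as a toy pipeline (my own simplification with made-up stage functions, not Resolve's or any real developer's code; the demosaic step is faked for brevity):

```python
# Toy raw development pipeline following the stages in the post above.
# Stage names match the post; the maths is deliberately minimal.
import numpy as np

def develop(raw_bayer, wb_gains=(2.0, 1.0, 1.5), exposure_ev=0.0, gamma=2.4):
    # 1. White balance: per-channel gains applied to the sensor data.
    #    (We fake a demosaiced 3-channel image here for brevity.)
    rgb = np.stack([raw_bayer * g for g in wb_gains], axis=-1)

    # 2. "Demosaic" placeholder -- a real pipeline interpolates the mosaic here.

    # 3. Exposure adjustment in linear light (one EV = one doubling).
    rgb = rgb * (2.0 ** exposure_ev)

    # 4. Scale into the output range and clip (the float maths above was unclipped).
    rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)

    # 5. Colour-space transform: linear RGB -> XYZ -> Rec709 primaries.
    #    sRGB and Rec709 share primaries, so this matrix pair round-trips here.
    rgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ rgb_to_xyz.T
    rec709_linear = np.clip(xyz @ np.linalg.inv(rgb_to_xyz).T, 0.0, 1.0)

    # 6. Apply the gamma curve (or skip this step for linear output).
    return rec709_linear ** (1.0 / gamma)

out = develop(np.random.default_rng(0).random((4, 4)))
print(out.shape)  # (4, 4, 3)
```

Changing the WB gains means re-running the whole chain from step 1, which is exactly the "demosaiced again and the cycle continues" behaviour described above.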
3D LUTs are destructive, so surely they should be avoided unless entirely necessary? I know Peter Doyle refuses to use them, so that's worth considering.
There's someone to aspire to for anyone who describes LUT ***kery as 'Grading'. A guy who uses L*a*b, custom tools and has a real passion for the art.
I'd love a definitive explanation on all this as I have a project coming up and I've yet to decide my final workflow approach.
The definitive explanation has got to be: test. A 3D LUT is nothing magical; it's input values mapped to output values with a heap of interpolation in between, based on a specific profile of a camera created under set shooting conditions, not a one-size-fits-all.
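That "input mapped to output with interpolation in between" is literally all a 3D LUT is. A minimal sketch (assuming the common scheme of an N x N x N lattice of RGB triples sampled with trilinear interpolation; this is illustrative, not any particular grading app's code):

```python
# Minimal 3D LUT application with trilinear interpolation.
import numpy as np

def apply_3d_lut(rgb, lut):
    """rgb: floats in [0,1], shape (..., 3). lut: lattice of shape (N, N, N, 3)."""
    n = lut.shape[0]
    # Scale colours to lattice coordinates and find the surrounding cell.
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.minimum(pos.astype(int), n - 2)   # lower corner of the cell
    frac = pos - lo                           # position inside the cell
    out = np.zeros_like(pos)
    # Blend the 8 corners of the cell -- this interpolation is where precision
    # is lost relative to the exact transform the LUT was sampled from.
    for corner in range(8):
        offs = np.array([(corner >> i) & 1 for i in range(3)])
        idx = lo + offs
        weight = np.prod(np.where(offs, frac, 1.0 - frac), axis=-1)
        out += weight[..., None] * lut[idx[..., 0], idx[..., 1], idx[..., 2]]
    return out

# An identity LUT should return colours unchanged (up to interpolation error).
n = 17
grid = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut(np.array([0.25, 0.5, 0.75]), identity))
```

Nothing in that table knows anything about your sensor or your shooting conditions; it just replays whatever transform it was baked from, interpolated. Hence the "not a one-size-fits-all" point.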
I'd really like to know the best and cleanest way to emulate film stocks.
Mmm, mentions Peter Doyle and the goal is to emulate film stocks.

Not something he'd do.

Why emulate film stocks? The goal surely is to create imagery that provokes a feeling, a memory. From what I've heard and read, he draws inspiration from everywhere other than a freaking film stock.

Joking.
Also, what do people make of the ACES workflow, and is it worth the hassle for the benefits, if indeed there are any?
Nope, not for ML Canon raw.