Confirmed - the 10-bit DNG looks a little brighter as well. Actually, this applies to any lossless preset from 8 to 12 bits: these use a different raw type that reduces the bit depth by applying digital gain.
Interesting - the 8-bit DNG looks best with the black level set to 2049; however, the 10-bit DNG from the previous set looks bad at 2049. Why?!
The scaling factor between the two is about 1.2 (0.26 stops). Linear regression, per-channel corrections for R, G1, G2, B:
- 14-bit vs 8-bit lossless (one frame): [1.1971 1.2035 1.1934 1.2467]
- 14-bit vs 10-bit lossless (one frame): [1.1864 1.1828 1.1842 1.1966]
- same fits after averaging all frames in the MLVs: [1.1878 1.1952 1.1854 1.2060] and [1.1799 1.1764 1.1793 1.1718], respectively
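For reference, this kind of per-channel fit only takes a few lines. The sketch below is a rough equivalent (not the exact script used here), assuming the DNGs open with rawpy, an RGGB Bayer layout, and placeholder file names:

```python
# Rough sketch: estimate the per-channel gain between a 14-bit reference DNG
# and a reduced-bit-depth lossless DNG by fitting a straight line through
# matching pixels. File names and the RGGB layout are assumptions.
import numpy as np
import rawpy

def bayer_channels(path):
    """Return the four Bayer sub-images (R, G1, G2, B for an RGGB sensor)."""
    raw = rawpy.imread(path).raw_image.astype(np.float64)
    return [raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]]

ref  = bayer_channels("frame_14bit.dng")           # 14-bit lossless reference
test = bayer_channels("frame_8bit_lossless.dng")   # reduced-bit-depth frame

gains = []
for x, y in zip(ref, test):
    a, b = np.polyfit(x.ravel(), y.ravel(), 1)     # slope a = per-channel correction
    gains.append(a)

print("corrections for R, G1, G2, B:", np.round(gains, 4))
print("difference in stops:", round(float(np.log2(np.mean(gains))), 2))
```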
Uncompressed: 14, 12, 10-bit:

Lossless: 14-bit/2047, 12-bit/2049, 8-bit/2049 (bit depth / black level):

Lossless: 14-bit/2047, 12-bit/2049 adjusted by -0.26 EV, 8-bit/2049 adjusted by -0.26 EV:

Lossless 8-bit, black level 2047, 2048, 2049, 2050:

Lossless 10-bit, black level 2047, 2048, 2049, 2050 (from the previous set):

Lossless 12-bit, black level 2047, 2048, 2049, 2050:

The above are 100% crops; click for full images.
Makefile used for rendering.
Averaged MLVs (first set, 14-bit vs 10-bit at black level 2048, adjusted by -0.26 EV):

Averaged MLVs (second set, 14-bit vs 8-bit at black level 2048/2049, adjusted by -0.26 EV):

edit: mlv_dump averages at the same bit depth as the input (not ideal); here's the same 14-bit vs 8-bit test, but averaged after 14->16 conversion:

=> there's a 0.5 LSB roundoff error with mlv_dump -a.
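This bias is easy to reproduce with a toy simulation (purely illustrative numpy code, not mlv_dump internals; the noise level, frame count and the assumption that the average is truncated back to the input bit depth are mine):

```python
# Toy simulation of the averaging bias: truncating the mean of N frames back
# to the input bit depth loses about half an LSB compared to averaging in a
# wider type (as in the 14->16 conversion above).
import numpy as np

rng = np.random.default_rng(0)

scene  = rng.uniform(2048, 16000, size=(100, 100))          # non-integer "true" levels
frames = [np.round(scene + rng.normal(0, 8, scene.shape)).astype(np.int64)
          for _ in range(64)]

float_mean = np.mean(frames, axis=0)                         # averaged in a wider type
trunc_mean = np.sum(frames, axis=0) // len(frames)           # averaged at the input depth

print("average bias (LSB):", np.mean(float_mean - trunc_mean))   # close to 0.5
```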