Yes, optical flow can be used to simulate motion. Of course, the simulation rests on a simplistic model of motion, not on what actually happened in the scene. For example, a light that appears to move between two frames could actually be two alternating lights. The practical impact of such a model, applied to a subset of pixels, is hard to predict and image-dependent. Optical flow is also compute-intensive.
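As a rough illustration of the idea (not a real optical flow implementation): the simplest flavor of motion estimation is recovering a single global translation between two frames via phase correlation. Real optical flow estimates a vector per pixel, which is where the heavy compute comes in. Function name and frame sizes here are illustrative assumptions:

```python
import numpy as np

def estimate_shift(a, b):
    # Phase correlation: recover the global (dy, dx) translation
    # between two same-sized grayscale frames.
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross).real       # peak sits at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the wrapped peak back into signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

This only works for pure translation of the whole frame; the "two alternating lights" case above is exactly the kind of scene where any such model guesses wrong.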
Heh, if optical flow were perfect, it could generate all the missing exposures in the original ISO flip-flop model and eliminate the temporal artifacts in the first place.
At 48/60fps, the ISO flip-flop method has its drawbacks, but the artifacts are less mysterious. Another neat benefit is that you essentially get two videos for one, both 24/30p at a 180-degree shutter. Merging is not required, and could be invoked only when recovery is needed.
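The two-for-one property is just deinterleaving. A toy sketch, assuming even frames carry one ISO and odd frames the other (the names and frame count are made up):

```python
# Deinterleave an alternating-ISO 48fps capture into two 24p streams.
# Each string stands in for a decoded frame. Because each 48fps frame
# was exposed for up to 1/48 s, either 24p stream on its own has a
# 180-degree shutter, no merge required.
capture_48fps = [f"frame{i}" for i in range(8)]
low_iso_24p = capture_48fps[0::2]    # one finished 24p video
high_iso_24p = capture_48fps[1::2]   # the other
```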
Or... I think the knock-out merge algorithm would use a motion-vector threshold to forfeit DR wherever merging would produce a (significant) artifact. That way static scenes are 100% HDR, and the benefit tapers off as motion gets crazy.
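A minimal sketch of that knock-out idea, with several assumptions baked in: the per-pixel merge weight falls off linearly with motion magnitude, the straight average stands in for a real HDR merge, and one exposure is designated the artifact-free fallback. None of these details come from the original description:

```python
import numpy as np

def knockout_merge(safe_exp, other_exp, motion_mag, thresh=1.0):
    """Blend two exposures per pixel, knocking out the merge
    wherever estimated motion exceeds `thresh` (pixels/frame)."""
    # Weight: 1 (full HDR merge) at zero motion, 0 at the threshold.
    w = np.clip(1.0 - motion_mag / thresh, 0.0, 1.0)
    hdr = 0.5 * (safe_exp + other_exp)   # placeholder for a real merge
    # Fast-moving pixels fall back to the single safe exposure.
    return w * hdr + (1.0 - w) * safe_exp
```

Static pixels get the full merged result, and anything past the threshold degrades gracefully to single-exposure DR instead of ghosting.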
To me, all this new stuff is more interesting wrt supersampling than it is to slo-mo and rez-mania. This video drives it home:
Look at those 1080p shots from the Alexa.