Is there no way to read just the data from the green pixels and discard red and blue directly off the sensor? That would already reduce the data by 50% on a Bayer grid (and either double the resolution or double the crop factor).
No, it would actually reduce the resolution, by anywhere up to 50% depending on how good a demosaicing algo you use. If you used a naive algo like 'super pixel', where each group of 4 pixels in the raw becomes one output pixel (maybe this is how you think raw is processed?), then yes, reading only the green pixels would double the pixel count relative to that, but this is not how demosaicing is done. With a good algo it's possible to recover almost all of the resolution by interpolating each pixel's missing color values from its neighbors' values (e.g. a red pixel's green and blue values are interpolated from the green and blue pixels around it).
http://en.wikipedia.org/wiki/Demosaicing
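For concreteness, here's a minimal sketch of that neighbor interpolation, assuming an RGGB pattern and plain bilinear averaging (real algos like AMaZE are edge-aware and far more sophisticated; the function name and fake data are just for illustration):

```python
import numpy as np

def bilinear_green(raw):
    """Estimate the green value at every photosite of an RGGB Bayer
    mosaic. Green sites keep their measured value; at red/blue sites
    the green value is the mean of the (green) 4-neighbors."""
    h, w = raw.shape
    green = raw.astype(float)  # astype makes a copy; green sites stay as measured
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 0:  # in RGGB, even-parity sites are red or blue
                samples = [raw[ny, nx]
                           for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                           if 0 <= ny < h and 0 <= nx < w]
                green[y, x] = sum(samples) / len(samples)
    return green

raw = np.random.default_rng(0).integers(0, 4096, size=(6, 6))  # fake 12-bit readout
print(bilinear_green(raw))  # full-resolution green channel estimate
```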
It will increase the resolution by 4x.
Do you have a comparison to prove it? (Against a good debayer algorithm of course, for example AMaZE.)
It would increase resolution by up to 4x, depending on the subject matter and the quality of the debayer algo.
If you were an astrophotographer taking a photo of an HII region that only emits light at one specific (red) wavelength, then yes, it would increase resolution by 4x, because there would be absolutely no useful information in the green and blue pixels (this is why astrophotographers use monochrome sensors).
If you're taking a photo of a rather smooth subject without a lot of hard edges, in normal white light, then you'll be able to recover almost all of the resolution with a good algo, and you'd see almost no improvement in resolution with a monochrome sensor.
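To put rough numbers on those two extremes (a back-of-the-envelope sketch; the sensor size is made up):

```python
mp = 24_000_000        # hypothetical 24 MP Bayer sensor (made-up figure)
red_sites = mp // 4    # RGGB: one red photosite per 2x2 block

# Narrowband red light (e.g. H-alpha): only the red sites record signal,
# so a monochrome sensor samples 4x as many points.
print(mp / red_sites)  # -> 4.0

# Broadband white light: a good debayer interpolates using all sites,
# so the monochrome advantage shrinks toward ~1x on smooth subjects.
```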