Two years ago I made a short movie with RAW and non-RAW footage from a 5D Mk II (I had only one fast card, so I couldn't shoot it entirely in RAW), and to me it is very clearly visible which scenes were shot in RAW and which in the compressed format. I also ran an experiment: I showed the movie to a few people (definitely not film or color specialists) and asked them to try to identify the footage shot in the "better format", as I described RAW to them. The results were quite interesting. On average, they correctly guessed 75% of the time which takes were shot in RAW and which were not. The "mistaken" takes were mainly H.264 shots without much motion — the compression artifacts in them were not so obvious, so viewers took them for RAW footage.
So if you mix in-camera H.264 with RAW, it will probably be recognizable to some viewers, but of course that does not mean you shouldn't do it, or that your movie won't benefit from it (e.g. if a second camera lets you get a different angle with more interesting framing, and so on). In "The Phantom Menace" they mixed 35mm film with footage from a very early HD camera (which — again — was quite visible to me), but the movie played in theaters and made a lot of money.