Messages - NedB

#51
@maxotics: No, I meant it exactly as I wrote it. Rewind is right! Of course I don't know where your research has taken you, so it's difficult to refute it step-by-step. But I will tell you that your understanding of debayering is simply incorrect. Each pixel receives a certain amount of light (the light coming into the camera through the lens optics), filtered by one and only one filter: red, green, or blue. That is, each pixel under a green filter (about 50% of pixels fit this description) stores the amount of green light hitting it, and likewise for the pixels under the blue and red filters (25% each of the total number of pixels). Each pixel stores this value (per pixel, not per "general area", and not three values for R, G, and B, but ONE value) as a 14-bit number. The result of all these measurements is the array shown in Rewind's post. If you think of each of these boxes as representing one pixel with a 14-bit value, the "trick" of demosaicing (also called debayering) is to interpolate, from each pixel's value and the values of the surrounding pixels, via mathematical algorithms (of which there are many), values of R, G, and B for each pixel, so as to render an image which looks like what the camera saw. There are no "two steps" to debayering, and no "rolling-up". The sensor data is completely available, and is called the raw data. After debayering, each pixel has a 14-bit value for each of its R, G, and B channels.
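To make "interpolate from the surrounding values" concrete, here is a minimal sketch of the simplest such algorithm, bilinear interpolation (Python with numpy/scipy; the RGGB layout and all names are my assumptions, and real debayering algorithms such as AHD or VNG are considerably more sophisticated):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Naive bilinear demosaic of a single-channel Bayer mosaic.

    raw: 2-D array of sensor values (e.g. 14-bit, 0..16383).
    Assumes an RGGB layout: row 0 = R G R G ..., row 1 = G B G B ...
    Returns an (H, W, 3) RGB image (edge effects ignored for simplicity).
    """
    h, w = raw.shape
    # Masks marking which pixels sit under which color filter.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Kernels that average a missing sample's nearest recorded neighbors
    # (and pass a recorded sample through unchanged).
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    rgb = np.zeros((h, w, 3))
    for ch, mask, kern in ((0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)):
        plane = np.where(mask, raw.astype(float), 0.0)
        rgb[..., ch] = convolve(plane, kern, mode="mirror")
    return rgb
```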

Simply put, everything you wrote between "From my research..." and "Whew" is incorrect. Also, as I stated above, Rewind is absolutely correct. Raw data has a single value for each pixel, but remember that the arrangement of the Bayer filter is known, and these two things together are what let us reconstruct RGB data. In effect, raw data are indeed the values of each sensor pixel. Only after debayering does each pixel have a value for each of R, G, and B.

In rereading your post, it almost seems as if you are conflating the idea of a Bayer filter with that of chroma sub-sampling. I can understand this confusion, because the concepts are somewhat similar. But they are not the same concept, and in fact raw data is the data coming "raw" right from the sensor. Please don't continue to argue this point; do some more research. If you still have questions I will try to help.

In the meantime Rewind has also replied, and I might just add: "What he said..." Cheers!
#52
@maxotics: You're welcome, no problem. As for the further questions:

1. Right, Canon DSLRs do not capture luma directly (there is a subtle, somewhat geeky difference between luma and luminance, which is irrelevant for our purposes, I think...). Rather, the value can be calculated from the values of R, G, and B. Each pixel receives a certain amount of light, filtered through either a red, green, or blue filter, and this amount of light is stored as a 14-bit value.
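Just as a sketch of that calculation, here it is using the Rec. 709 weights (Rec. 601 uses 0.299/0.587/0.114 instead, and strictly speaking luma is computed from gamma-encoded R'G'B' values; the function name is mine):

```python
def luma_rec709(r, g, b):
    """Luma (Y') from gamma-encoded R'G'B' values, Rec. 709 weights.
    Note how heavily green counts -- one reason Bayer filters are 50% green."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```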

2. An alpha channel is used to indicate which parts of an image are transparent and which are not. It is simply a grayscale image in which the parts of the image that should be transparent (i.e., where a background plate should show through when layered behind it in, say, After Effects or some other compositing program) are black, the parts which are completely opaque are white, and the parts which are semi-transparent are gray, with the amount of gray proportional to the level of transparency. I imagine what Y3llow means by "create it downstream" is that there is no reason to transcode Canon raw into any format which has an alpha channel, since this alpha channel (not a luma channel, as you may have thought) can be created in a compositing program (by "keying" out a greenscreen, for example) at some later point. The raw frames themselves can remain without an alpha channel. For this reason, ProRes 4444 is somewhat overkill (see the first sketch after point 3 below). However...

3. ...you do have a point in that ProRes 422 or 422 HQ do sub-sample the chroma (that is, they decrease the resolution of the color channels after the signal has been transformed from RGB to YCbCr; see the second sketch below). So there is some loss in this transcode, but it is probably all but invisible to the eye. If you are not doing a lot of green/blue-screen keying, then you almost certainly don't need 4444.
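To illustrate point 2, here is a minimal sketch of the classic "over" operation a compositor performs with an alpha channel (Python with numpy; the function name is mine, and alpha is normalized so 0 = black = fully transparent and 1 = white = fully opaque):

```python
import numpy as np

def composite_over(fg, bg, alpha):
    """Layer fg over bg using a grayscale matte: where alpha is 0 the
    background shows through, where it is 1 the foreground is opaque,
    and in-between grays blend proportionally."""
    a = alpha[..., np.newaxis]   # broadcast the matte across R, G, B
    return fg * a + bg * (1.0 - a)
```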
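And for point 3, a sketch of the idea behind chroma sub-sampling. This is only the idea, not ProRes's actual filtering (real codecs use better resampling filters, and I assume even frame dimensions):

```python
import numpy as np

def subsample_chroma_420(y, cb, cr):
    """Keep luma at full resolution, but average chroma over 2x2 blocks
    (4:2:0). For 4:2:2 you would average horizontal pairs only."""
    def pool2x2(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, pool2x2(cb), pool2x2(cr)
```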

4. (1280 x 720 pixels x 14 bits/pixel) / 8 bits/byte = 1,612,800 bytes/frame. In this calculation, you will notice there is no RGB. Canon raw video is just that, raw, meaning not yet debayered or demosaiced. So this means that...

5. No, neither Canon nor ML debayers the raw video which ML captures. The only debayering going on in the camera happens when you record H.264 video. The camera feeds the raw video to a "black box" (meaning non-Canon people can't easily see inside it) encoder chip which decreases the resolution to about a third (the method and amount vary by camera) and encodes the result (possibly with a small up-rez, for cameras like the 550D, etc.) into H.264 at 1920x1080 or whatever you set in the Canon menu.

The fact that the Canon cameras deliver a down-rezzed (from the full sensor resolution), but not yet debayered, video stream to the LiveView monitor is the happy accident which allowed the ML devs to figure out how to save this raw stream. If this stream did not already exist in these cameras, it is unknown whether it would have been possible to get raw video out of them.

Whew. Cheers!
#53
@maxotics: You've got the Bayer stuff almost right, but not quite. (See http://en.wikipedia.org/wiki/Bayer_filter.) A Bayer filter (an array of red, green, and blue color filters) sits above the sensor. This causes each pixel below the Bayer filter to be sensitive to red, green, OR blue light. The filter pattern is 50% green and 25% each red and blue (because the human eye is much more sensitive to changes in the level of green light than to changes in blue or red). In the case of the sensors used in the Canon DSLRs, each pixel records a 14-bit value from 0 to 16,383 (2^14 - 1). These values are not "de-bayered", but they are, in a sense, "rolled up" into one value: that value simply represents how much red, green, OR blue light reached the individual pixel during the time the sensor was collecting light (the shutter interval). It's not a mixture of R, G, and/or B. Luminance does not enter this equation at all; it only comes into play when we try to reduce the amount of data we have to move around and/or store, via chroma sub-sampling.

The "raw" output of the Bayer is a Bayer pattern image, which must thereafter be "Debayered". Debayering is a demosaicing algorithm which uses complex mathematics to interpolate values of R, G, and B for each individual pixel. [The resulting data can be thought of as 4:4:4, simply because there has been no compression (or "sub-sampling") of the chroma channels yet. Each pixel has its own values for R, G and B. In contrast, H.264 data (remember that old Canon video stuff??) can be thought of as 4:2:0 (Google it)]. There are many different debayering algorithms, which vary in quality and speed. So ML raw is a sequence of 14-bit values, one for each of the x times y (width x height) of the captured frame. That's why the size (in bytes) of an ML raw frame can be calculated by (width x height x 14 bits / 8 bits per byte) or 1.75 w x h bytes.

There is a lot more in your post to respond to. If you would let me know whether or not the above is helpful, I will elaborate. Some of this ML stuff is quite complicated (my jaw hangs open sometimes, reading a1ex's posts...), and it's no wonder some are confused. I suppose the manufacturers do assume a level of advanced knowledge of the basics which may be above that of the average ML user, or even most ML users. Cheers!
#54
Raw Video / Re: RAW liveview on PC monitor via USB
September 06, 2013, 12:10:49 PM
@Yaros525: Your app works with the 550D also (Windows 7, 64-bit). On entering the ML menu (on camera), the app on the PC says "Live View suspended". The camera image comes back to the PC immediately upon leaving the ML menu. If you turn on RAW video, however, and then exit the ML menu, Live View remains "suspended" and no picture is transmitted to the PC. In short: what he said! But very cool app anyway, for H.264 for now. Focus peaking, RGB, HSB (don't you mean HSV?), and Zoom all work, as do the indicators for picture style, aperture, etc. Lag seems to be somewhere under half a second. Full screen works great with this app, so if you have a 1920x1080 monitor, you are seeing your HD image pixel-for-pixel, which is amazing for focusing purposes. Also, it works differently from the Canon EOS Utility, which, as far as I know, will not show the camera image in full-screen.

All in all: great start, super monitoring solution for H.264. Cheers!
#55
@dude: If you convert a .raw file (using raw2cdng.exe rather than raw2dng.exe) to CinemaDNG, you can point Scratch to the first .dng file and it will open and play the clip. Pretty cool. Colors are a bit off, but Scratch is apparently a huge application (even the player portion), and there's probably a way to insert a LUT somewhere so that the clip will look reasonably like what you shot. Just FYI, I haven't played with it for very long. Cheers!

P.S. Not sure anymore where I got raw2cdng.exe, it was on the ML forum somewhere, you should be able to Google it.
#56
@dude: the link (in the Assimilate press release) for the Scratch player has been pulled by Assimilate, apparently, but I found the new location:

http://www.assimilateinc.com/scratch-play-build

Cheers!
#57
@telecastDaveRI: For our "low-information" readers and noobs, we should probably avoid calling Dual-ISO footage "interlaced", although I certainly know what you mean. If you are shooting Dual-ISO correctly (maybe not optimally, but getting footage where alternating lines have different ISO values), the individual frames look "interlaced", except that the two sets of lines are of course not separated in time by half a frame, like NTSC or PAL interlaced footage. On the other hand, you can't "de-interlace" this footage in After Effects, because it isn't "interlaced"; it's just Dual-ISO, which is a completely different concept. What you are seeing as successfully "de-interlaced" footage in AE is just a result which eliminates or blends alternating lines to make the individual frames look "smooth". But what a1ex does in his dual-ISO software (originally an updated version of raw2dng.exe, but recently he has suggested using cr2hdr.exe) is a fairly complicated mix of interpolation, estimation, prediction, etc., which attempts to combine the best pixels from the alternating-line footage in order to give you an image with increased dynamic range. I can't stress it enough: though dual-ISO footage looks "interlaced", it is NOT interlaced in the true sense of the word.

For what it's worth, I still can't figure out how to do the second step of a1ex's suggested dual-ISO video workflow, that is, to "drag-and-drop" the .dng files onto the cr2hdr.exe executable. When I do that, nothing seems to happen. When I open a command line, navigate to the location of cr2hdr.exe, and run it with a command like "cr2hdr.exe thingy.dng", where "thingy.dng" is one frame of some dual-ISO footage, this sometimes seems to work and produces a frame which no longer looks, um... "interlaced". But sometimes not, and this is obviously not a workable solution if your dual-ISO footage contains hundreds or thousands of frames.
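One possible stopgap for the hundreds-of-frames problem would be to loop that same command over every frame. A sketch in Python (assuming, as above, that cr2hdr.exe accepts one .dng per invocation, and that you run this from the folder containing both the frames and the executable):

```python
import glob
import subprocess

# Run cr2hdr.exe on every DNG in the current folder, one frame at a
# time -- the same single-file command line that sometimes works above.
for dng in sorted(glob.glob("*.dng")):
    subprocess.run(["cr2hdr.exe", dng], check=True)  # stop on any failure
```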

I guess we're both waiting for someone to chime in who has this working reliably (on a PC, I might add). It's fun to play with, but I wouldn't want to use dual-ISO unless I were in a situation where I had to, as the aliasing issues make it hard for me to accept the resulting images, noise-free (or close to it) though they may be. Cheers!
#58
@telecastDaveRI: Would you mind telling us exactly which ML build you're using, and also which cr2hdr.exe and raw2dng.exe (links if you have them)? I had an experience similar to yours, where I thought it didn't work and then all of a sudden I thought, "Oh, yes it did". So I put dual-ISO aside for a few days, hoping it would become clear what the issue was. If you'd tell us exactly how you got your results, I'll try to reproduce them and then report back. Thanks in advance and cheers!

Edit: Oh, and if you don't mind, what dual-ISO settings/resolution/etc.? Just to eliminate other causes of "failure"...
#59
-deleted-
#60
1% or anyone similarly knowledgeable: Is there a build with this functionality available yet for the 550D? If so, could you point me to a link? I've read the entire thread, but there are only little hints that it might be found somewhere. In any case, thanks for answering! Cheers!
#61
@muffins2u: The DNG files have no intrinsic frame rate, since they are a series of images, not a video file. After Effects has a default setting which simply guesses what the frame rate might be; in your case, the default (which applies if you never change it after installing After Effects) is 30fps. To change this, right-click on the imported "footage" (i.e., the DNG sequence) > Interpret Footage > Main. See the "Frame Rate" box? Here's where you set the frame rate of this imported sequence of images to whatever you want. To change the AE default, go to Edit > Preferences > Import and change the "Sequence Footage" default to, again, whatever you want. If you are always shooting 23.976 (23.979 does not exist as a standard frame rate), then simply type 23.976 here, and in the future AE will "guess" that every sequence you import should be assigned the frame rate 23.976.

So now you understand what was happening in AE, right? AE gave your sequence a frame rate of 30fps, so when you exported at 30fps, it looked smooth, but probably a bit fast. When you tried to export at 24 or 23.976, it was jerky because AE was trying to blend and drop frames to accommodate your wishes. But if your Interpret Footage setting had been correct, the opposite would be true: 30fps exports would be jerky, but 23.976 would be smooth, as it represents the sequence's "native" frame rate.

This is very basic knowledge that those who work with AE on a regular basis learn at the beginning. AE is a huge and very complex program and you can't expect to just "use" it off the bat without there being some things you don't understand. It would be better to try Google next time and assume that the problem is just that you are lacking knowledge, and not that the camera, or ML, or AE, or some other program is at fault. Cheers!

Damn, I see that Andy600 has already replied with his customary pithy and excellent expertise: what he said...
#62
Raw Video / Re: ML Raw vs. Log
July 10, 2013, 08:47:24 PM
@franciscolobo: What I have gleaned from reading this forum fairly comprehensively for the past few months is that, while this and other similar ideas could work in theory, none of the cameras has a CPU (or DIGIC processor) powerful enough to record raw footage and simultaneously convert it into another form. It is not a question of development; there is simply not enough remaining hardware capacity to do the job. Although, at the same time, I must say that all the devs laughed themselves silly a year or so ago at anyone who suggested we could one day have raw video. I believe the difference is that the raw video was in a sense always there, just waiting for a1ex, g3gg0, 1%, nanomad, etc., to figure out how to grab it and write it to the SD and/or CF cards. There is no extant stream (as far as we know, but...) of raw video which has already been converted to some other, smaller form, other than the H.264 output from the dedicated chip which results when shooting normally (i.e., non-raw). So far, the only possibility which seems doable to the devs (though I'm not sure anyone is still working on it) is to reduce the bit depth of the raw video from 14-bit to 12-bit or 10-bit, which would cut the bitrate by only 14-29%, not enough to make the difference we need, especially on the older cameras like the 550D. Cheers!
#63
@dlrpgmsvc: I believe you are half correct. H.264 is indeed YUV, subsampled at 4:2:0. But the raw video that ML now delivers is, according to a1ex himself in a tweet to Stu Maschwitz, 14-bit RGGB: uncompressed, not-yet-debayered, 14-bits-per-pixel data straight from the sensor, with the only intermediate step being Canon's mysterious, black-box reduction (binning? line-skipping? both?) of the sensor resolution of 5184x3456 down to the LiveView resolution, which varies by camera and is apparently 1728x974 for the 550D. Though it is true that the sensor array has twice as many sensels (?) for the G channel as for R and B, I don't think it's correct to call it either YUV or 4:2:2. IMHO, of course. Cheers!
#64
I believe "Force Left" is also NOT working on the 550D. I've tried it on three separate occasions, and each time the result is a .RAW file filled with garbage frames (looks as if it's an issue like "vsync"...there is obviously some image material there, but it is not synced to the frame width and/or height. FYI!
#65
Feature Requests / Re: raw_rec with start delay
June 18, 2013, 12:40:58 PM
@marekk: The raw_rec module "Tragic Raw" (http://www.magiclantern.fm/forum/index.php?topic=5582.msg52319;topicseen#msg52319) has a selectable start delay. Not sure which cameras this applies to. I believe "Magic Raw" used to have a selectable delay but no longer does; again, not sure why. Cheers!
#66
@xNINELiVES: The beep is timed exactly to the "shot time" of the first .DNG in the raw sequence. So you literally drag (in the timeline of your NLE) the external audio you recorded (you did start sound before picture, right?) until the beep lines up with the beginning of the first frame. Boom: synced. You don't need an in-camera sync track to match up to, as long as the sound recorder you use catches the "beep" sound of the first-frame sync beep. Of course, you have to keep track of what you shot and what you recorded, because if you shoot for a while you might have multiple beeps on your externally recorded sound track. Cheers!
#67
@fatpig: Don't drop the folder; use the Import browser to navigate TO THE FIRST .DNG IN THE FOLDER. This works for me. Cheers!
#68
550D with mk11174's "ML_Mem_All_Fix2" build gives 896x512 at 23.976fps (video mode) or 24.000fps (photo mode) as the maximum continuous-shooting resolution with a 16:9 aspect ratio. For widescreen, this build will also shoot 1216x416 at 23.976fps continuously. I had Hacked Mode on the whole time, but I'm not sure if this has any effect at all.

Tip: Download and install the trial of GoPro Cineform Studio Premium and you can view your raw clips as video in more-or-less realtime after converting them to .dng's. Pretty cool. Cheers!

EDIT: Using a SanDisk ExtremePro 95MB/s 8GB SD card, formatted with exFAT and 16384KB clusters (Allocation Units). Not sure if exFAT is relevant or not; I did some pretty thorough testing of FAT32 vs exFAT, and write/read speeds seemed almost identical using the ATTO Disk Benchmark. Of course, you NEED exFAT to record clips bigger than 4GB. I tested my setup to over 6GB and the file was fine; I extracted the .DNGs without a problem using RAW2DNG_Batchelor_1.1 (the newer versions gave me problems, so I've stuck with this one).
#69
@Kraemer: I don't have a 5D3, but from reading the forum I wonder if you have remembered to set FPS Override to 23.976fps? Apparently, if you don't do this, the camera (even though you have set it in the Canon menu to 1920x1080 at 24p) defaults to trying to record at 30fps (or 29.97), and this might explain why the data rate is causing you to skip frames. When you run raw2dng on the .raw file (in Windows; I'm not familiar with how it works in OSX or Linux), what do you see as the frame rate in the DOS window that opens while raw2dng is writing out the DNG frames? You should see 23.976. Cheers!
#70
what he said...
#71
@hammermina: Format the card in Windows, where you can choose exFAT and also the cluster size ("Allocation Unit Size"). Then, to make it bootable, run EOScard (Google it). The next step is important: forget about the nightly build. Just use the link provided above in Reply #262 by MK11174. Unrar the contents and copy them as-is to your SD card. This worked for me. Cheers!
#72
@mk11174: Oh, and one more thing: SanDisk ExtremePro 8GB 95MB/s, formatted with exFAT and a 16384KB cluster size. No idea if these details are relevant or useful...
#73
@mk11174: With your latest "ML_Wave_BetterMod_Raw Rec" build, I'm getting 1152x432 and also 1216x416 (both at 23.976fps) continuous, quite a step up from your/our previous best of 960x480! So 1280-by-something can't be far away, and suddenly we're only up-rezzing by 50% to get to 1080p, rather than 100%. And all this in only a week or so. Amazing.

Also noticing that with the new build I can really go up to (and over a little bit, I think) the 20MB/s mark on a continuous basis, whereas on the older builds, the limit seemed to be about 16MB/s. So there must be much better buffering and/or more memory available with your new build. Hope that doesn't merit a "duh".

My results above were in video mode, with hack on, no sound (obviously). Looking forward to more programming goodness in the coming days. Let me know if there are any tests I can/should do to help the effort. Cheers!
#74
@telecastDaveRI: As far as I know, the bitrate adjustment in ML applies to the final H.264 encoding and wouldn't apply at all to getting raw video out of the DSLR. In other words, the data rate for raw (at 14-bit) is what it is: proportional to the dimensions of the video. The raw hack doesn't do any processing (other than cropping) of the camera-provided raw feed, so there is no place in this chain of events to apply the 0.7x throttle you are suggesting. Apparently, other devs and assorted brainiacs are working on reducing the number of bits of raw data written to the SD card from 14 to 12 or 10. This would have a modest effect on the bitrate (either 12/14, about 14% less, or 10/14, about 29% less), and get us from the currently stable 960x408 (or 480, as mk11174 has just informed us) up to the next step or so on the ladder (1120-by-whatever). So if we are willing to accept less color fidelity, we can get a slightly sharper picture.
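A quick check of those fractions, trivial as it is (just to show where the 14% and 29% come from):

```python
for bits in (12, 10):
    saving = 1 - bits / 14          # fraction of the 14-bit data rate saved
    print(f"{bits}-bit: about {saving:.0%} less data than 14-bit")
# 12-bit: about 14% less data than 14-bit
# 10-bit: about 29% less data than 14-bit
```

Cheers!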
#75
@mk11174: Thanks a lot for your hard work. We are certainly victims of the old (Chinese?) curse: May you live in interesting times...

Cheers!