Messages - Yoshiyuki Blade

#76
Looks like there are some crazy experiments going on here! These developments aren't trickling into the nightly builds yet, are they? I'll wait (im)patiently to see what my camera lens can show with your software. :)
#77
As someone mentioned earlier in the thread, if you set the picture quality to JPEG instead of RAW in the default camera menu, you may get some more frames out of it. That's how I went from 36 frames to 51 with the 5D2. Not sure if it works on the 6D too, but it's worth a try!
#78
Quote from: kgv5 on May 03, 2013, 07:01:04 PM

Yoshiyuki Blade, I assume that this is from the 5D2 (right?), so how many frames per burst did you capture? Was your movie made from 1080p or 720p clips? What card did you use with the camera?

Each segment is about 51 frames, so a little over 2 seconds @ 24 fps. They were made from the raw DNGs, so they were originally 1880x1250. I stretched it horizontally to 1920 and cropped it to 1080. The card is a Sandisk Extreme Pro 16GB 90MB/s, though I doubt that matters much in raw burst mode. It seems that all the frames are captured in memory, then it takes its time writing to the card. Real-time RAW capture seems WAY off in that sense; it takes maybe 20x longer to write all the frames.
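
In AviSynth terms, the stretch-and-crop step was something like this (just a sketch; the TIFF sequence name is hypothetical):

ImageSource("frame_%06d.tif", start=0, end=50, fps=24)   # 51 frames converted from the DNGs, 1880x1250
Spline36Resize(1920, 1250)   # stretch horizontally only: 1880 -> 1920
Crop(0, 85, 1920, 1080)      # trim 85 px off the top and bottom: 1250 -> 1080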

Also, my public sharing limit for dropbox was reached again, haha. I guess I can try hosting it somewhere else, like Mega, if you guys think it's a good idea. I'm not a fan of uploading to a video service (youtube, vimeo, etc.) since they recompress the files.

Quote from: sicetime on May 03, 2013, 07:50:18 PM
OH MY GOD THAT LOOKS INCREDIBLE!!!!!!!!!!

What amazing work, seriously breathtaking.

Second that on asking what settings you used, Yoshiyuki.

I simply opened the DNGs in photoshop, did the click-to-white-balance on the reference you saw at the end and boosted the saturation a tiny bit. There were also some default options (brightness, contrast, sharpening, color noise reduction) which I left on. I'm not a pro at color grading or post processing so I left a lot of stuff alone.
#79
I don't think it's a good idea to obsess over the frame rate to comply with "cinema standards." It can be a slippery slope. I reckon true cinema also uses a 180-degree shutter, full chroma resolution, a jello-free picture, etc. Yet these challenges are difficult or impossible to overcome when filming with a "mere" DSLR. At some point you have to come to terms with your technological limitations and work within them. Getting exactly 24 fps by skewing the audio is probably the easiest thing you can do yourself. :) Why not just comply with digital video standards and keep it at 23.976? That's probably where most ~24 fps content ends up anyway (DVD, Blu-ray, etc.).
#80
I think an easier solution is just to record normally at 23.976 (or 24000/1001 to be exact), assume exactly 24 fps in the software, and speed up the audio proportionally. I bet the change in audio pitch will be imperceptible.
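
In AviSynth, that retiming is basically a one-liner; here's a sketch (the filename is hypothetical):

FFmpegSource2("clip.MOV", atrack=-1)   # load the video plus its audio track
AssumeFPS(24, 1, sync_audio=true)      # retime 23.976 -> 24.000 fps; audio is resampled ~0.1% faster to stay in sync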
#81
There was some breezy weather a couple days ago so I took some shots of tree leaves swaying around as well as other things with high frequency detail (which would fall apart quickly if the bitrate isn't high enough). I'm still constantly amazed by how much better the quality is! This is the way it was meant to be recorded! The fact that my computer can't even decode it in real-time at lossless 4:2:0 shows that the complexity is quite high.

The lossless 4:4:4 file was nearly 2 GB, so I won't upload it (my free dropbox account is limited to 2 GB of bandwidth daily); instead, I arbitrarily compressed it to BD specs. I also had an IT8 target for my flatbed scanner in there. I didn't use it to help with the video or anything like that.
https://dl.dropboxusercontent.com/u/63799907/test%20BD.mkv
#82
Modules Development / Re: 14bit RAW DNG silent pics!
April 30, 2013, 02:40:57 AM
Recorded about 10s of random footage. Applied WB, converted to 8-bit TIFF in Photoshop's DNG converter, (slightly) stretched and cropped for 1080p, and encoded it losslessly. No denoising or sharpening applied.

The 8-bit 4:2:0 is 279 MB while 4:4:4 is 508 MB lol. As expected, the visual difference isn't huge unless I find some bright vivid reds. I'll edit this post later with a link of both clips for you guys to compare.
Here are the lossless encodes for comparison:

Edit: (Links removed due to excessive dropbox traffic causing suspension. :o)

Raw is definitely way overkill for any kind of practical recording, but will there be a possibility for a compromise? The improvement in resolution alone makes it worthwhile, but what happens between the raw buffer and the YUV 4:2:2 output that kills the detail so much? You'll see that even bringing a raw image down to 4:2:0 doesn't affect the image quality that badly. Is there some kind of filtering applied to increase compressibility at the cost of detail?

Quote from: Kabuto1138 on April 29, 2013, 08:26:19 PM
Just got my 5d2 to get 51 frames when taking out the raw+jpeg still function.

Wow, same here. I had it on raw and was getting 36 frames, but now I get 51 too when I set it to JPEG. Doesn't seem to affect quality in any way. I wonder what happened? :D
#83
Modules Development / Re: 14bit RAW DNG silent pics!
April 29, 2013, 06:08:04 AM
Wow, even if it's only 1.5s @ 24 fps, getting a glimpse of the camera's absolute best quality in real-time is incredible! Can anything out there capture these frames at 105 MB/s continuously? :P

Gonna play around with this a little more!

edit: Tested a clip at lossless 4:4:4 8-bit, but my computer couldn't decode it in real-time lol. As expected, there's not much of a noticeable difference in real-life footage.
#84
Here's an output I made at 4:2:2, PC levels (0-255)
....

The image will probably look darker than it should because the levels aren't rescaled to TV (16-235), similar to how a clip from a Canon camera would look unprocessed.

Used these settings from MeGUI: --preset veryslow --crf 0 --colorprim bt709 --transfer bt709 --colormatrix bt709 --output-csp i422

Here's a "standard" 4:2:0 at TV levels. it should look as intended:
....

Edit: (Links removed due to excessive dropbox traffic causing suspension. :o)

#85
Quote from: g3gg0 on April 27, 2013, 12:07:12 AM
here some example video:
https://docs.google.com/file/d/0BwQ2MOkAZTFHdU1tR1pITXFVVXM/edit?usp=sharing
(not sure how to make it look better and not take 600MiB)

You can try compressing with x264. I compressed the 45 frames in the other zip file and it came out to about 25MB lossless.
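
For example, something along these lines (a sketch; it assumes the frames are loaded through an AviSynth script first, and the filenames are hypothetical):

x264 --preset veryslow --crf 0 -o frames_lossless.mkv frames.avs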
#86
Modules Development / Re: 14bit RAW DNG silent pics!
April 27, 2013, 12:48:27 AM
Quote from: 1% on April 27, 2013, 12:07:30 AM
Simple, throwing away color to 4:2:0 and repeated resizing. 4MB of data to less than 1

I don't think chroma sub-sampling is a factor in this case, unless Canon's subsampling method is so awful that it affects the luma channel :D. Looking at the stills in 4:0:0 (greyscale), there's a definite difference in luma resolution. Is there anything big happening between the point where the raw DNGs are captured and the point where the 4:2:2 raw-silent-pics-of-old were recorded? Other than chroma sub-sampling, which has a very minimal perceptual impact, the difference in resolved detail is still quite clear.
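
If anyone wants to repeat the greyscale comparison, here's a quick AviSynth 2.6 sketch (filenames hypothetical; it assumes both sources share the same dimensions):

a = ImageSource("dng_frame.tif").ConvertToYV24(matrix="PC.601").ConvertToY8()
b = FFVideoSource("old_silent_pic.MOV").ConvertToY8()
return StackHorizontal(a, b)   # luma channels side by side, chroma discarded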
#87
Modules Development / Re: 14bit RAW DNG silent pics!
April 26, 2013, 11:34:20 PM
I must say that the quality is much, much better than the original silent pic. Before, the difference in actual resolved detail between video (at Q-16) and the raw silent pic was virtually nonexistent. Sure, there was a quality difference from the absence of H.264 compression artifacts, but it didn't really increase resolution by much. The raw DNG definitely captures more detail: I can more clearly read small text that's very blurry in video. For kicks, I compared it to a "reference" full-resolution 21MP still image downscaled to 1920x1080, and although the DNG obviously can't compare (it still has aliasing issues among other artifacts), it's still pretty darn good and has similar resolution characteristics.

The only problem atm is that there's no way to properly apply a picture style to the raw DNG. Video and the old silent pic basically had the picture styles "baked in," but that's not the case for raw DNGs now. So although I can still get a rough idea of the resolution improvement, the colors are way off and won't be easy to adjust.

It makes me wonder what else utterly destroys the video quality when it goes from the raw data to the final output. As I've mentioned before, the old silent pic didn't look much different from normal video, save for the lack of compression artifacts. Being 4:2:2 and uncompressed didn't make as huge a perceptual difference as the raw DNG does now.
#88
Modules Development / Re: 14bit RAW DNG silent pics!
April 26, 2013, 09:53:31 PM
Great work guys! I tried it on the 5D2 and cropped off 160 px on the left-hand side and 18 at the top. The ratio appears to be 4:3, 1880x1410. I haven't tested this yet, but will the resolution be different when taking silent pics while recording (like the old way was)? edit: Just checked, it doesn't change, so this is the theoretical max I assume. edit2: Oops, I didn't notice the big white border at the bottom. It's closer to 3:2, at 1880x1248.
#89
Though this may be known already, I've found that FPS override with audio recorded separately allows for high bitrates. I don't know how high, but it's much higher than default; I can maintain a constant Q-16 while recording indoors without a hitch, since the bitrate indoors rarely exceeds 100 Mbps, which is below what my card can handle.

Of course, the issue is the sound sync. I still have no idea how to systematically sync the A/V back together. I think it's mainly a problem with the true framerate that FPS override records at, which doesn't seem to match the FPS it claims to record at. Once I can figure that out, it should be cake.
#90
One thing the guy in the video mentioned that bothered me is that he somehow drew a connection between a Bayer filter and the various types of chroma subsampling in the YUV color space (technically YCbCr). I noticed someone else make that connection in another video too, so I don't know whether I'm misunderstanding something, whether they're oversimplifying things, or whether they're misunderstanding it themselves.

Afaik, Bayer filters capture individual RGB colors and interpolate them, since each pixel doesn't have all 3 color channels dedicated to it. (It's also called RGBG, or some different order depending on the pattern, because there are 2 green pixels for every 1 red and 1 blue in a 2x2 pixel block.) Anyway, the raw image from the sensor can be "demosaiced" internally or via a RAW photo editor and encoded in RGB. I don't see how this has anything to do with 4:4:4, 4:2:2, and 4:2:0; those come from a completely different color space. Unlike RGB, where every pixel carries all three color channels, YUV dedicates 1 channel to luma and 2 channels to chroma, which makes it much more efficient. It also allows images to have lower chroma resolution than luma, exploiting the human eye's weaker sensitivity to chroma resolution than to luma resolution. Nothing like that happens in RGB. The two are also different enough that they can't be losslessly converted between one another: a full 8-bit RGB picture can be converted to YUV 4:4:4 and back and look very close, but there will be rounding errors.
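
To see those rounding errors for yourself, here's a little AviSynth 2.6 round-trip sketch (the test image name is hypothetical):

src = ImageSource("test_chart.png").ConvertToRGB24()
rt  = src.ConvertToYV24(matrix="PC.601").ConvertToRGB24(matrix="PC.601")   # RGB -> YUV 4:4:4 -> RGB
return Subtract(src, rt)   # near-uniform gray; any visible texture left is pure rounding error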
#91
Yeah, recording without audio is the key factor in increasing video bitrate tremendously. As a matter of fact, when I fool around with camera settings and record indoors, I can just keep the quality at constant Q-16 and it won't go anywhere near my card's limit (~120 Mbps). It does exceed that limit outdoors though. There's just too much high frequency detail to maintain Q-16 with my card.

With audio on, I can't really exceed 60 Mbps. But with audio off, indoor bitrates typically range from 70-90 Mbps so it makes a sizable difference. With this much bitrate at our disposal, I wonder how much more quality can be achieved with mjpeg. Could it exceed what Q-16 of the H.264 encoder can do at similar/lower bitrates? I'm eager to see how things pan out.
#92
It could be that the audio.avs script doesn't call the required plugin (FFMS2) by itself. This is all it says:

SetMemoryMax(1024)
FFAudioSource("..\RAW.MOV")


I think that, unless you have FFMS2 installed in AviSynth's plugins directory (where plugins load automatically), you need to load it yourself. Something like this:

SetMemoryMax(1024)
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\ffms2.dll")
Import(ScriptDir()+"..\Avisynth-plugins\FFMS2.avsi")
FFAudioSource("..\RAW.MOV")


I'm not sure if that'll help though; I'd be surprised if this were the solution and it had somehow gone unnoticed until now.
#93
Right, I've been saying that the levels should be crushed for the "final delivery" as standard practice:

Quote: However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

By "end of chain," that's basically what I meant. With an automated process like the HDR script, the levels should be at TV range at the very end of the chain as opposed to the very beginning (the way it is currently).

I agree with everything you say too. Everything up to the final delivery should be left alone so there's more information to process. However, the automated HDR script doesn't really leave room for that. If it were up to me (and this is what I usually do), I'd stop at the point where enfuse merges all the frames, then do various other adjustments in Photoshop before loading it back into video and adjusting further. It really depends on the program used. Most AviSynth filters work in YV12, so you'll have to make adjustments before you convert to RGB, or after you enfuse and convert back to YV12.

I think we've just been misunderstanding each other, but I've been speaking in the context of the HDR workflow script that's currently available. A more advanced workflow is beyond the scope of what this script offers, as you know.
#94
Quote from: y3llow on January 15, 2013, 03:20:31 PM
Well, I'd disagree; would you mind explaining why, in the case of this merge script and this source, you think they must?

Perhaps try it: take a Canon MOV which had an ITU BT.601 color matrix used and do a PC.601 conversion to RGB, then try with a coloryuv(levels="pc->tv") first and a rec601 conversion to RGB, both in AviSynth, and look at the histograms. :-) Which do you prefer?

OK, so the first example,

ConvertToRGB24(matrix="PC.601")

will keep the full detail and will look similar to a RAW still image converted with the same settings in DPP. However, this is a special case of having full-range video in the YV12 space. IMO, this should only be an intermediate step before reaching the final output.

The 2nd example,

ColorYUV(levels="pc->tv")
ConvertToRGB24() # by default it assumes Rec.601, so it doesn't have to be specified

crushes the levels to TV range first, then scales them back to 0-255 when converting to RGB. Even though the histogram is bumpy now, this is pretty much what I'd expect from typical video.

I prefer keeping things in full range (option #1) simply because more quality is preserved. However, by convention the YV12 color space assumes TV range, so most players decode video under that assumption. It's possible to keep full range in YV12 and set some flags for it, but like you said, video players have to honor it, and generally they don't. If you upload an unedited MOV file to youtube, it'll just clip off that extra luma range. I don't think full range is supported in the Blu-ray specs either. For the final output, converting to the standard TV range is the safest bet to get the video looking as intended for most people.

I think the general process should flow like this:

Decode the video and do Interframe interpolation
ConvertToRGB24(matrix="PC.601")
Extract frames, enfuse, re-import to video
ConvertToYV12(matrix="Rec709")
Do any other misc filtering in the YV12 space
Encode
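
The tail end of that list, as a rough AviSynth sketch (the enfused-frame filename pattern and count are hypothetical):

ImageSource("..\frames\C_%06d.tif", start=0, end=239, fps=24)   # re-import the enfused frames (RGB)
ConvertToYV12(matrix="Rec709")   # final conversion; levels land in TV range for HD delivery
# hand the result to x264 for the final encode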

Converting back to YV12 will crush the levels, but will fit the standards of HD video. A properly functioning video player should decode the video with its intended look. If you do this instead:

converttoyv12(matrix="pc.709")

The full range is kept, but most video players will simply clip the extra range on playback, which crushes shadows and highlights. In fact, if you open a .MOV file straight from the camera, that's what it will look like, because it's stored pretty much the exact same way (but with rec.601 coefficients). Specific video player configurations would be required to benefit from using full range.
#95
I think we are in full agreement that converting to RGB and back to YV12, before Interframe is called, is unnecessary. However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

Quote from: y3llow on January 15, 2013, 09:44:50 AM
Rescaling luma at 8-bit int, if done poorly, will introduce quantization error in the RGB; evidence of that would be a combed, spiky histogram when viewing the image frame output.
That's normal, yeah? Since almost all standard video under the sun is TV range, it must be scaled back to full range upon playback, which introduces those combs in the histogram. I can open up a random Blu-ray stream, take a screenshot, and paste it into Photoshop; there will be little spikes across the histogram.

Quote from: y3llow on January 15, 2013, 09:44:50 AM
It's not necessary to rescale luma at all for the purposes of merging.

I'd suggest doing all the Interframe stuff, then use ConvertToRGB(matrix="PC.601") for T2i's etc. and PC.709 for 5D's etc.; best use MediaInfo to establish which. That gives YCC levels in RGB.
Yeah, in this thread, I removed the little back-and-forth conversion and it works just fine.

Quote from: y3llow on January 15, 2013, 09:44:50 AM
There's no reason to do a color space conversion and lose quality to scale luma; it can be done in YV12 with a PC-to-TV type levels adjustment.
PC-to-TV adjustments do virtually the same thing and crush the levels too, at least in AviSynth. The filter ColorYUV(levels="pc->tv") crushes the levels to TV range without an RGB conversion, but it's not aware of which matrix is used. All in all though, as we established, this step is totally unnecessary anyway.
#96
Quote from: y3llow on January 14, 2013, 11:00:05 PM
Also the hdr-split script still has a totally unnecessary ConvertToRGB then convert to YV12, two color space conversions at 8bit integer that are pointless, the original MOV's are already YV12, that's what FFVideoSource hands to the script anyway.

I think the author's intent with that was to "crush" the levels to TV range, which is standard for YV12. By default, the video recorded by Canon cameras is at full range, but in the YV12 space. But yeah, I mentioned in another thread that it's unnecessary. The Interframe filter will work fine without crushing the levels first, so that step can be saved for last.
#97
I made these changes to hdr_split.avs, but I'm not sure how it all ties into the entire workflow. I don't use the entire automated process and usually stop after the "C" frames are produced, so use it with caution.

SetMemoryMax(1024)
Import(ScriptDir()+"..\Avisynth-plugins\InterFrame.avsi")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\ffms2.dll")
Import(ScriptDir()+"..\Avisynth-plugins\FFMS2.avsi")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\RemoveGrainSSE3.dll")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\mvtools2.dll")

A = FFVideoSource("..\RAW.MOV")
A = selecteven(A)             # select even or odd frames and interpolate them
A = assumefps(A, 12000, 1001)
A = InterFrame(A, NewNum=24000, NewDen=1001, GPU=false, FlowPath=ScriptDir()+"..\Avisynth-plugins\")
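# trim(1,0) below drops the first frame of stream A; presumably this keeps the interleaved A/B output in temporal order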
A = trim(A, 1, 0)
A = ConvertToRGB(A, matrix="pc.601")
A = ImageWriter(A, "..\frames\A", type = "jpg")

B = FFVideoSource("..\RAW.MOV")
B = selectodd(B)              # select even or odd frames and interpolate them
B = assumefps(B, 12000, 1001)
B = InterFrame(B, NewNum=24000, NewDen=1001, GPU=false, FlowPath=ScriptDir()+"..\Avisynth-plugins\")
B = ConvertToRGB(B, matrix="pc.601")
B = ImageWriter(B, "..\frames\B", type = "jpg")

return Interleave(A,B)


Looking at this had me thinking of a few things, like outputting to 16 bits and doing manual adjustments in Photoshop from there. The final result is usually pretty flat, so further adjustments will just destroy what few bits there are in 8-bit. That's a little more advanced though. It also had me wondering whether there are other ways to simulate HDR, like capturing at 30 fps, merging to 15 fps, then interpolating to 24 fps. Interframe has also been updated quite frequently, so I wonder whether it does things better now. This will be fun to experiment with.
#98
1) The range compression happens because it's the standard for almost all the video you see. Upon playback with a video player, you shouldn't see gray blacks and gray whites; it should scale to the full range automatically and look as intended. As a matter of fact, if you play the original MOV files, the shadows and highlights will actually be clipped since the player assumes TV range. That little script compresses the range for you. Yeah, it sucks that we aren't utilizing the full range all the time, but that's just how things are atm, along with having other junk like interlacing and chroma subsampling. :) The latter is understandable since it saves a little bit of bandwidth, but video game footage tends to suffer most because of it.

I'm going to investigate this a little later, but I'm curious as to why the script does this operation *before* it enfuses everything. Unless Interframe only operates in the TV range, this operation should be saved for last.

edit: Interframe seems to operate just fine without having to "pre-crush" the levels. I think it could use an update:

1) Decode the video
2) Separate the odd and even frames
3) Double the FPS with Interframe
4) Convert to RGB with rec.601 coefficients (or rec.709, depending on which camera is used I guess)
5) Extract the frames as image files (by default it's JPG, but you can change it to TIF or something for a lossless workflow)
6) Enfuse frames
7) Return the image files as frames
8) Convert back to YV12 with rec.709 coefficients

I think this would make better use of the available data.
#99
General Chat / Re: LQ video compression in Canon DSLRs
January 12, 2013, 09:08:12 PM
This is a bit of conjecture, but I think with the flag on, it should play back correctly as long as the player accounts for it (no clipping). With it off, the player will assume TV range and clip everything outside the 16-235 range.

I can probably test this with x264 to find out how it works once and for all.

Edit: I have to set --input-range pc and --range tv for proper playback. Everything will be mapped correctly that way, but it'll behave as if the luma range was crushed and stretched, so there's information loss. You can easily see the gaps on the histogram where the range is being stretched.
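
For reference, the sort of x264 command line that test boils down to (a sketch; filenames hypothetical):

x264 --input-range pc --range tv --preset veryslow --crf 0 -o range_test.mkv input.avs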

Edit2: MPC-HC with the Haali renderer can output the full range with no information loss; the histogram shows a smooth curve. The downside is that you have to configure it manually.

Quite frankly, outside of editing, it's a good idea to "compress" the levels to TV range at the end since most players assume TV range.
#100
General Chat / Re: LQ video compression in Canon DSLRs
January 11, 2013, 11:07:50 AM
So they finally use the appropriate matrix now, eh? The rule of thumb I read a while back was to use rec.601 for SD resolutions and rec.709 for HD. The weirdest thing about these older video modes is that they're recorded with rec.601 but flagged for rec.709 on playback, making the colors look even more off than usual. It's particularly noticeable in the reds, which come out more orange than they should. It's all easily remedied in post though.
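
For example, one way to fix it in AviSynth (this assumes the third-party ColorMatrix plugin is installed; the filename is hypothetical):

FFVideoSource("MVI_0001.MOV")          # clip recorded with rec.601 coefficients
ColorMatrix(mode="Rec.601->Rec.709")   # redo the coefficients so rec.709-flagged playback looks right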