Messages - Shizuka

#1
Please don't actually do a plain SHR 2, because it introduces a small bias in the values.

In the case of [aaaaaaaaaaaaaabb], you are forcing bb to always be 00 when it was 01, 10, or 11 before the shift.
Those low bits might not matter much when you're well above the black level, but rounding errors can throw away information in the darkest stop of data.

Two other ways to do it are:

Flat rounding - no-bias nearest-neighbor rounding
if bit15 (the upper of the two dropped bits) is 1, add 1 to bit14 (the last kept bit)

Random rounding - random dither
01 - 25% chance of adding 1 to bit14
10 - 50%
11 - 75%
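
A minimal C sketch of all three options, assuming 16-bit samples being reduced to 14 bits (the helper names are mine, and rand() is just a stand-in for whatever PRNG is handy):

#include <stdint.h>
#include <stdlib.h>

/* Plain SHR 2: always rounds down, so values with bb != 00 are biased dark. */
static uint16_t shr2_truncate(uint16_t v)
{
    return v >> 2;
}

/* Flat rounding: if the upper dropped bit is set, round up.
   Equivalent to the "add 1 to bit14" rule above. */
static uint16_t shr2_round(uint16_t v)
{
    return (uint16_t)((v + 2) >> 2);
}

/* Random rounding: the dropped bits set the chance of rounding up,
   01 -> 25%, 10 -> 50%, 11 -> 75%, so the mean value is preserved. */
static uint16_t shr2_dither(uint16_t v)
{
    unsigned dropped = v & 3;            /* the two bits SHR 2 throws away */
    unsigned r = (unsigned)rand() & 3;   /* uniform in 0..3 */
    return (uint16_t)((v >> 2) + (r < dropped));
}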
#2
Quote from: a1ex on September 09, 2013, 02:01:47 PM
Can you share the temp file hack?
it's assembly-patched - the source would be the bindiff, sorry

I would imagine the changes in source are here:
//line 145
        printf("\nInput file     : %s\n", filename);
        char tempname[5];                        /* 4 chars + NUL terminator */
        tempname[0] = filename[4];
        tempname[1] = filename[5];
        tempname[2] = filename[6];
        tempname[3] = filename[7];
        tempname[4] = '\0';                      /* terminate before using %s */
        char tempname_txt[9];                    /* "XXXX.txt" + NUL */
        char tempname_pgm[9];
        snprintf(tempname_txt, sizeof(tempname_txt), "%s.txt", tempname);
        snprintf(tempname_pgm, sizeof(tempname_pgm), "%s.pgm", tempname);

        char dcraw_cmd[1000];
//line 148
        snprintf(dcraw_cmd, sizeof(dcraw_cmd), "dcraw -v -i -t 0 \"%s\" > %s", filename, tempname_txt);
        int exit_code = system(dcraw_cmd);
        CHECK(exit_code == 0, "%s", filename);
       
        unsigned int model = get_model_id(filename);
        exit_code = get_raw_info(model, &raw_info);

        CHECK(exit_code == 0, "RAW INFO INJECTION FAILED");
//line 157
        FILE* t = fopen(tempname_txt, "rb");
        CHECK(t, tempname_txt);

also replace "tmp.pgm" with tempname_pgm (lines 184, 188, and 189)

finally, replace the unlink() parameters at lines 283/284


a1ex, a suggestion: turn on SSE floating-point instructions - it'll give a 20-30% speedup for free on many speed-critical routines. The gcc flags are -msse -msse2 -mfpmath=sse, and the resulting executables run on Pentium 4 / AMD K8 or higher.
edit: maybe not - I only found two such routines
#3
Quote from: zuzukasuma on September 09, 2013, 11:26:58 AM
I'm using multiple virtual machines for this job, now 4 virtual 1 real windows 7 crunching numbers to process photos***.



***for those who still wants multithread processing, you must use industrial grade harddrives or ssds for this job. temp file and final dng file at multithread kills the drive. I'm using intel 520 series, which has best price/performance/security ratio on the market.

Too much work. I wrote custom utilities for this sort of thing before:
http://www.mediafire.com/download/5n5pjih98b80909/MT_cr2hdr.zip
Note that cr2hdr is hacked to use unique temp files based on the input file name.
#4
Pretty much the cheap way to do this is a multithread-capable dispatcher plus a cr2hdr modified to use temp files named from the CR2's checksum (a sketch of the naming idea is below). It's not really possible to speed up cr2hdr itself if dcraw is not multithread-capable (Amdahl's law, etc.).
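
A minimal sketch of the checksum naming (the hash choice is mine for illustration; the MT_cr2hdr hack above actually slices characters out of the file name instead):

#include <stdio.h>
#include <stdint.h>

/* djb2 string hash - any cheap checksum works here; the point is just
   that each input file gets its own temp names, so parallel instances
   of cr2hdr don't clobber each other's tmp.txt / tmp.pgm. */
static uint32_t djb2(const char *s)
{
    uint32_t h = 5381;
    while (*s)
        h = h * 33 + (uint8_t)*s++;
    return h;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    char tmp_txt[16], tmp_pgm[16];
    uint32_t h = djb2(argv[1]);
    snprintf(tmp_txt, sizeof(tmp_txt), "%08x.txt", h);
    snprintf(tmp_pgm, sizeof(tmp_pgm), "%08x.pgm", h);
    printf("%s -> %s %s\n", argv[1], tmp_txt, tmp_pgm);
    return 0;
}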
#5
Quote from: Danne on July 26, 2013, 05:58:53 PM
@Ilias

updated my link now containing the first dngfile from both of the examples. base-iso 1600 for both files. 6400-1600 on the dual-iso. (reversed due to converting issue, still base-iso 1600 though). Lighting and camerasettings identical for both shots.

https://docs.google.com/file/d/0B4tCJMlOYfird3RMV0l5c2c0RzQ/edit?usp=sharing

//D

Do you even gain any benefit doing dual ISO with recovery ISOs above 1600 (APS-C, 5D2) / 3200 (5D3+), i.e. the point where the read noise no longer decreases?

The theory suggests that only 100/[200,400,800,1600] make sense, plus 100/3200 if you've got a 5D3.
#6
Please express the frame rate as a rational number, numerator/denominator, instead of framerate/1000. This mitigates precision errors, which grow for lower frame rates.
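
A quick illustration of why (a standalone sketch; 24000/1001 is the exact rate that "23.976" approximates):

#include <stdio.h>

int main(void)
{
    double exact    = 24000.0 / 1001.0;  /* true rational frame rate */
    double millifps = 23976.0 / 1000.0;  /* framerate/1000 approximation */

    /* timestamp drift after roughly one hour of frames */
    long frames = 3600L * 24000 / 1001;  /* ~86313 frames */
    double drift = frames / millifps - frames / exact;
    printf("drift after 1h: %f s\n", drift);  /* ~0.003 s, and it keeps growing */
    return 0;
}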
#7
A rather relevant question is "how repeatable are these tests at the same buffer size?" A card might be rather fast when it's empty, but fill it up and it'll slow down as it does live garbage collection. An old card I had been using for an embedded system wrote at 15 MB/s initially, which degraded to 9 MB/s after a month of use.

The old 50 nm X25-M comes to mind here.
#8
Quote from: a1ex on May 17, 2013, 10:53:47 PM
These long tests are really nice. Keep them coming, please!



Wow, I guess that goes to show that not all cards are created equal... the Sandisk card is a mess of inconsistency compared to the Transcend cards!
#9
Quote from: wolf on May 17, 2013, 08:24:31 PM
Still pink.
I tried

skip_right  = zoom ? 0 : 1; 

Still pink also.

C:\>dcraw -T -v -v 48660002.DNG
Loading Canon EOS 550D image from 48660002.DNG ...
Scaling with darkness 0, saturation 15000, and
multipliers 2.651050 1.000000 1.388727 1.000000

>darkness 0
this is your problem; it needs to be 2048 or something close to that

dcraw.exe -k 2048 is what you need

autodetect_black_level() produces a value of 0/0 here. Hard-code it to some reasonable value, like 2048.
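
Something along these lines, as a sketch (the signature is an assumption on my part, mirroring the function named above):

/* If black-level autodetection fails (returns 0), fall back to a sane
   default instead of letting dcraw scale from darkness 0, which is
   what produces the pink cast. */
static int black_level_or_default(int detected)
{
    return (detected > 0) ? detected : 2048;  /* typical Canon black level */
}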
#10
Even if you do match shutter speeds, the DNG doesn't have Adobe's NR calibrations baked into it. You'll find that DNG denoising is significantly weaker than a CR2's unless the DNG has metadata that identifies a (profiled) camera.
#11
Quote from: ivanatora on December 23, 2012, 10:03:29 PM
Okay, that's weird. I tried your ffmpeg command to compress video, but it resulted in file that was waay bigger than the original :) Ofcourse, I changed -s hd1080 to -s hd480 (since I'm shooting in 480p, there is no need to scale it up).
First, the aspect ration seems wrong. The output video was 852x480, while the original was 640x480, but I assume it is a matter of tunning of the -s parameter.
Second, I got awful white noise in all underxposed parts. Like someone was lit christmas decoration in every shadow :P

The filesize was a bit up, but I assume it was because of the wrong frame size:


-rw-r--r-- 1 ivanatora ivanatora 272M 2012-12-23 22:48 MVI_2355.mov
-rw-rw-rw- 1 ivanatora ivanatora 163M 2012-12-23 20:15 MVI_2355.MOV



That's because converting H.264 to MJPEG isn't compression; it's decompression (or more precisely, recompression to a less efficient format).

Since you already have ffmpeg, try this:
ffmpeg -i <input file> -acodec aac -vcodec libx264 <output.mp4>

Note that "aac" and "libx264" may differ in your build of ffmpeg; use ffmpeg -codecs to find the x264 and AAC encoders. Also note that neither of the two ffmpeg AAC encoders (at least in my build: libvo_aacenc, aac) is actually decent for audio compression. For a decent AAC encoder, you'll need neroaacenc.
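
For example, to check what your build ships (assuming a Unix-ish shell; the listing format varies across ffmpeg versions):
ffmpeg -codecs | grep -i aac
ffmpeg -codecs | grep -i 264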
#12
Just use x264 and be done with it... MJPEG and Xvid are outdated codecs (well, if you want MJPEG for fast decoding, that's another story).
#13
Feature Requests / Re: Custom File Prefixes
November 29, 2012, 08:20:20 AM
Quote from: 1% on November 28, 2012, 04:22:08 PM
Yea, A1ex's patch is much longer in the other thread. But it names brackets like B01 and the next one B02, etc.  There is an on screen keyboard and such but this works for me.

*Its in my build for now.  If you want it in ML tree, ask. Everyone was afraid to test it and thread died.

I'll test it.
#14
Feature Requests / Re: Custom File Prefixes
November 28, 2012, 09:57:34 AM
Quote from: 1% on November 28, 2012, 06:00:30 AM
This would help HDR. I always forget what goes with what.

The patch works.

https://bitbucket.org/OtherOnePercent/tragic-lantern/changeset/a98233251397bc3029b65a35b2ac27aa23e76460

Please note that I did not write this patch, so credit appropriately.

Actually, I was hoping the file prefix could be set arbitrarily - for example, read from a file called A:\DCIMPRFX.TXT containing the four bytes "T2I_" - since we don't have alphanumeric input yet.
#15
Quote from: 1% on November 28, 2012, 04:22:36 AM
I doubt they would donate. Especially the most expensive camera they have. The code is out there, if you want to do anything to 1D and have one just do it. So few people own them its not worth developing for in any kind of group effort.

We could try to port some 1D features to other cameras but what do they even have that isn't tied to hardware?

The one 1D-series feature I'd love to see come over to Magic Lantern is custom file prefixes for file organization, where the first four letters of the filename can be set arbitrarily.

No one has responded to my feature request q_q
http://www.magiclantern.fm/forum/index.php?topic=3258.msg15888#msg15888
#16
Quote from: driftwood on November 21, 2012, 04:53:07 PM
Seems to be some good work going on here. Where are we at in terms of encoder patching? I could be of some help - haven't ML'd my 5DmkII , 7D or 60D yet but you never know...

The 5DMKII, 7D 60D are h264 Profile IDC baseline level 5 which isn't a brilliant starting point... but according to the spec allows a maximum video bitrate of 135,000. Max decoded picture buffer of 110,400 at 1920x1080. (Having said that, if you look at the h264 stream / PPS there could the usual baseline 66 constraints on these three cameras)

Have you found a way of raising the decoder picture buffer yet (dpb/cpb)? Youve got to stop the buffer underuns/overruns and tests need to be done on buffer analysis / stream analaysis software (like elecard or CodecVisa, etc...)

Have any Quantisation scaling matrices been found ? Are they being employed?

With the 5DMKIII being level 5.1 High profile (thats slightly higher than the new Pany GH3 you should be able to do something. You've got adaptive 8x8/4x4 transform, scaling tables, cabac or cavlc, in-loop deb locking etc...

Here's the stock bitrates of the GH3 in ALL-I 72Mbps .mov mode.

Bit Rate = 71680000 (69999 = bit_rate_value_minus1/SchedSelIdx/*0*...)
CPB Size (the coded picture buffer) = 64512000 (62999 = cpb_size_value_minus1(SchedSelIdx/*0*...)

Intitial QP=20/ QP=20...

Actually, someone email me a very small All-I file / dropbox me. [email protected]

Hmmm... need to get hold of a MKIII.

For the non-5D3 cameras, there's nothing that can really be done:
QSM (quantization scaling matrices) isn't available in baseline profile, and maxDPB is just a specification of how many reference frames can be used. Canon's encoder only predicts from the previous frame, i.e. one reference frame.
#17
Quote from: 1% on November 08, 2012, 01:07:49 AM
Is the mirror actually connected to the shutter or can this behavior be patched out?

I think it's part of the shooting mechanism in the lower-end cameras. Not sure of the actual answer.
#18
Quote from: Digital Corpus on November 07, 2012, 02:29:47 PM
Aside form line skipping, there is pixel binning. Effective pixel count of the sensors is 5184x3456. However, DXO reports 5360x3515 for the 7D and DP Review states that the T2i's sensor is just a little different than the 7D's. Despite the same effective resolution, it is possible that the skipping and binning is a little different between the two.

On a similar note, according to Cambridge in Color, the aperture range where we may start to see loss in resolution is f/19 to ~f/28. However, our sensor is not 2.1 MP so there may be some interesting results as we hit that range. This is next on my list of things to test which are currently

1) Visually perceptible difference between the 7D w/ & w/o the VAF-7D
1.1) Pixel peeping difference as well
2) Tests to try and establish the pattern for which Canon skips rows of pixels and bins them together to help more finely establish resolution information
2.1) Establish the same baseline for 720p recording, which currently suffers from horizontal scaling artifacts
3) Test theoretical diffraction limits of 2 and see how the image holds up.

The 7D sensor resolution figure is the number of addressable pixels on the array; it doesn't exclude the masked (always-black) pixels. See http://lclevy.free.fr/cr2/#app for a table of sensor sizes - but note that the masking values there are the Canon values, which exclude extreme edge pixels. More pixels can be extracted from the sensor dump; I have a modified version of dcraw that can reveal these edge pixels.

I took a look anyway (it was a two-byte change) at the 7D vs. the T2i. The difference is that the 7D's sensor has 16 additional black-mask pixels on the left side, and 16 additional total pixels; the imaging area remains the same.

From my experiments, I think there is horizontal line skipping and vertical pixel binning. More interestingly, I saw vertical and horizontal stepping artifacts in the 1720x974 YUV dump from my T2i, suggesting the movie-recording YUV buffer may actually be upscaled from a lower resolution.
#19
Quote from: ideimos on November 06, 2012, 10:24:42 AM
Is this beacuse of the encoder Canon uses? settings?

Canon's encoder uses the bare minimum subset of H.264 in order to say they use H.264: no B-frames, no CABAC compression, only one reference frame... and worst of all, no adaptive quantization, which spends bits in the areas that need them most.
#20
Archived porting threads / Re: First 7D alpha released!
November 05, 2012, 03:19:36 PM
Quote from: a1ex on November 05, 2012, 03:04:26 PM
Well... I'm pretty sure the CPU power needed to do this will negate any improvement and could cause jitter issues or maybe worse.

But could this also be used to remove hot pixels in video?
#21
Got 422s from my T2i. Turns out I was using the wrong values for FullHD live view image resolution, but it's the same idea.

1720x974 and 1056x704 - not scaled at all :)
1024x680 - scaled! (5x / 10x zoom modes) - massive bilinear resize artifacting - don't use this mode!

--

Now with a 1720x974 figure, we can take a look at how Canon does it:
5202 x 3465 sensor -> 5160 x 2922 16:9 sensor -> 5160 x 974 every-3-horizontal-lineskipped sensor
This is assuming Canon actually does full reads of each line, instead of also skipping every third pixel within the line...

I'm not sure whether the demosaic engine does vertical line skipping. If it does:
5202 x 3465 sensor -> 5160 x 2922 16:9 sensor -> 1720 x 974 every-3-horiz+vert-lineskipped sensor
That's a very familiar resolution, don't you think?
We would also see massive moire / aliasing in the vertical direction. I didn't see this, so I think Canon isn't doing vertical line skipping.

My guess is Canon thought they might have to do H+V line skipping, but did not end up needing to.
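
For reference, the arithmetic above in trivially runnable form (figures from this post):

#include <stdio.h>

int main(void)
{
    int w = 5160, h = 2922, skip = 3;  /* 16:9 sensor crop */

    /* horizontal line skipping only: every third row kept */
    printf("%d x %d\n", w, h / skip);          /* 5160 x 974 */

    /* hypothetical H+V skipping: every third row and column kept */
    printf("%d x %d\n", w / skip, h / skip);   /* 1720 x 974 */
    return 0;
}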

Also, ML 2.2 had FullHD silent picture. Was this feature removed in ML 2.3, or is there some other way to access it without actually recording a movie?
#22
Quote from: 1% on November 04, 2012, 06:40:19 PM
You can look at raw 4:2:2 dumps of what the input to the encoder is. The "while recording" sizes are exactly what is being encoded.

Just drop a .422 while recording, or in my case any time in video mode. Available image from that log is full sensor size so we can probably chose the line skips. Might be worth comparing 422 at zoomed out, 3x, 5x (much smaller), 10x (very tiny), etc.

I have a T2i, but I'll take a look at it. I noticed rescaling artifacts in the 1056x704 mode, but that's the only size I've taken silent pics in.

Full image pipeline, as interpreted by me:

[Sensor] --lineskip--> [LineSkip RAW] --demosaic--> ?DIGIC image? --DIGICprocessing--> {YUV422}
This is used for live view
{YUV422} --video_rescaler--> {YUV422-to-H264} --h264_encode--> (H.264)

DIGICprocessing = sharpness, color, contrast, curves (Standard, cinestyle, flaat_10, etc.), rescaling
Legend: [raw data], ?unknown data?, {yuv data}, (compressed data)
#23
Quote from: 1% on November 04, 2012, 04:26:56 PM
Looks like a real good interpretation on why they picked the sizes they did. So crop mode really has that little chroma resolution, wow. But does that mean the numbers won't work out to scale it to something else? Non crop HD is even more skips and a lower resolution. Looking in the logs, it says the available area is bigger, close to that 2000 size but non at 396 or 487. But also the window being used is smaller.

[GMT][WAKU] Avail WinW:H(1036:1036)->(1036:1036)
[GMT][WAKU] Avail X:Y(2920:1268)->(2920:1268)


You kids are so pampered; back in the NTSC days, we had thirty lines of color information!

The human eye is terrible at picking out color resolution, so Bayer sensors take advantage of that weakness. I wouldn't expect the "low" chroma resolution to have any visible impact. (Worst-case scenario: sharp-edged, highly saturated objects - see the footnote.)

The 396/487 figures (which are estimates) are post-demosaic "chroma resolution" values, estimated by halving the source resolution on the assumption that a 2x2 superpixel is required to get color information for a group of pixels. The reality is somewhat murkier (think PenTile displays): demosaicing inherently blurs in both the vertical and horizontal directions. Regardless of the actual resolution, the fact that demosaicing reaches above and below the current line means that the reduced chroma resolution of 4:2:0 is probably more appropriate than 4:1:1 or 4:2:2.

TLDR: you won't find these values in the firmware because they are an estimate of the resolution; the doubled values may be lurking around though.

{footnote}
I'll use an example of 4:2:0 chroma resolution artifacts here, although it's not completely relevant. (What is relevant is a live view 4:2:2 dump - and seeing how Canon deals with chroma there)
http://upload.g3gg0.de/pub_files/ec9400781e1b25f5ff9df7cde66b96a7/mpeg_frame_1.PNG - see the red color blocking at the mop on the floor? This artifact is produced by nearest neighbor upscaling of the U and V planes to the Y plane's size. A filter to interpolate it goes a long way to reducing these artifacts. :)

Of course, the real question is: how does Canon demosaic the sensor to 4:2:2? The cynic in me thinks that since it's going to be rescaled to video resolution anyway, there's not much reason to do a good job during the demosaic. Determining actual chroma resolution from 422 files is difficult, because Canon does image processing *before* the Live View image is provided to us. I looked, and I saw sharpening and scaling artifacts.
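
For illustration, here is the difference between the two upscaling approaches (a sketch assuming co-sited chroma samples; this is not Canon's or g3gg0's actual code):

#include <stdint.h>

/* Nearest-neighbor 2x chroma upscale - the cheap approach that produces
   the color blocking visible in the linked frame. */
static void chroma_up2x_nn(const uint8_t *src, int sw, int sh, uint8_t *dst)
{
    for (int y = 0; y < sh * 2; y++)
        for (int x = 0; x < sw * 2; x++)
            dst[y * sw * 2 + x] = src[(y / 2) * sw + (x / 2)];
}

/* Bilinear 2x chroma upscale - averaging the surrounding samples smooths
   the block edges considerably. */
static void chroma_up2x_bilinear(const uint8_t *src, int sw, int sh, uint8_t *dst)
{
    int dw = sw * 2;
    for (int y = 0; y < sh * 2; y++) {
        int sy = y / 2;
        int ny = ((y & 1) && sy + 1 < sh) ? sy + 1 : sy;      /* clamp at bottom */
        for (int x = 0; x < dw; x++) {
            int sx = x / 2;
            int nx = ((x & 1) && sx + 1 < sw) ? sx + 1 : sx;  /* clamp at right */
            /* for even x/y the four taps collapse onto one sample */
            int v = src[sy * sw + sx] + src[sy * sw + nx]
                  + src[ny * sw + sx] + src[ny * sw + nx];
            dst[y * dw + x] = (uint8_t)((v + 2) / 4);
        }
    }
}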
#24
Is there any reason that 422 JPEG recording makes sense?

Let's look at Canon's 1056x704 live view:
This resolution is acquired at the sensor by line skipping: one row in five is read from the 5202x3465 (3:2) sensor area.
The resulting 5202x693 RAW is demosaiced and rescaled to 1056x704 4:2:2 for live view.
A 5202x693 bayer sensor has a chroma resolution of 2601x396.
The final resolution is the minimum of every step above: 1056x693 luma resolution and 528x396 chroma resolution. Vertical chroma resolution is limited by Bayer line skipping.

5202x693 RAW Bayer -> 5202x693 Y / 2601x396 UV [assuming 2x2 superpixel required for chroma determination]
1056x704 422 YUV -> 1056x704 Y / 528x704 UV

We can also look at the 1728x792 recording live view:
This resolution is acquired at the sensor by line skipping: one row in three is read from a 5202x2925 (16:9) sensor area.
The resulting image, 5202x975, is demosaiced and rescaled to 1728x792.
A 5202x975 bayer sensor has a chroma resolution of 2601x487.
The final resolution is the minimum of every step above: 1728x792 luma resolution and 864x487 chroma resolution. Again, line skipping limits the vertical chroma resolution.

5202x975 RAW Bayer -> 5202x975 Y / 2601x487 UV [assuming 2x2 superpixel required for chroma determination]
1728x792 422 YUV -> 1728x792 Y / 864x792 UV

If the video was saved as a 1728x792 4:2:0 H.264 video, we would actually lose vertical chroma resolution. But Canon upscales this to 1920x1080 before encoding it:
A 1920x1080 4:2:0 video frame contains a 1920x1080 Y (luma) and 960x540 UV (chroma) planes.

The final resolution available is 1728x792 Y / 864x487 UV, and these values fit comfortably within the limits of 1920x1080 4:2:0 video (960x540 chroma). Canon's upscaled 1080p is therefore large enough to carry all of the detail in the 1728x792 4:2:2 source, since there are only ~480 real lines of color information.
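
The whole "minimum of every step" estimate fits in a few lines (figures as above; the 2x2-superpixel chroma assumption is the same one used throughout, and the function names are mine):

#include <stdio.h>

static int min_i(int a, int b) { return a < b ? a : b; }

/* Each plane's effective resolution is the smaller of what the
   line-skipped Bayer data can deliver and what the 4:2:2 output
   buffer can carry. */
static void effective_res(const char *name,
                          int raw_w, int raw_h,   /* line-skipped RAW */
                          int uv_w,  int uv_h,    /* Bayer chroma estimate */
                          int out_w, int out_h)   /* 4:2:2 YUV output */
{
    printf("%s: %dx%d luma / %dx%d chroma\n", name,
           min_i(raw_w, out_w),     min_i(raw_h, out_h),
           min_i(uv_w,  out_w / 2), min_i(uv_h, out_h));
}

int main(void)
{
    effective_res("1056x704 live view", 5202, 693, 2601, 396, 1056, 704);
    effective_res("1728x792 recording", 5202, 975, 2601, 487, 1728, 792);
    return 0;
}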
#25
Quote from: P337 on November 04, 2012, 09:30:04 AM
aww :( are you sure it's that bad?  Either way this is the best JPEG could offer so I'm happy for that and if that's still not good enough there's also a full 422 option ;)
 
I may have miss-termed "motion artifacts", what I meant was prediction errors as these are all full frames like an intra codec.


Not sure what kind of prediction errors you are talking about. Predicted frames ("P-frames") can also encode stationary information as intra blocks when motion can't be detected, e.g. in noise. Given the same number of bits to work with, it's better to give the encoder the flexibility to choose what has moved and what is stationary. Motion-predicted blocks aren't [as] vulnerable to blocking artifacts (or their evil counterpart, deblocking) since they don't necessarily lie on block boundaries.

All-I is really meant for ease of editing (i.e. seeking and splitting): you don't need an intermediate file for frame-exact splits with All-I files.