Messages - KMikhail

#51
Yup, I checked everything.

I kept GLOBAL DRAW ON for focus peaking and framing. Writing is around 85MB/s; it starts out red/yellow, and if it doesn't stop due to skipped frames within a few seconds, it will continue beyond 20+ minutes and 120+ GB, though with failed/re-queued frames. I need sound, so it is MLV only, 10 Apr revision.

I found throughout the forum that the fastest 32GB Lexar cards can punch through 100MB/s, and 64GB Komputerbay cards can, on average, do almost the same. Thus, I hoped to get into the 90MB/s range, since the card is rated at 155MB/s (I understand the caveat with KB cards). I just need to make a decision regarding that particular card.

My ultimate hope was for 1080p 30fps, albeit in spanning mode, but apparently the card struggles with 1080p 24fps, and spanning doesn't help a whole lot.
#52
Well, I got one - a 1066x 128GB (I really prefer 128GB for documentaries).

My stats for movie mode @ 1080p 24fps:
85MB/s writing with a not-so-consistent beginning (sometimes it skips frames; FAILED 150, queued 1 as I test it right now).

I really hoped for more; it gives me 115MB/s and 150MB/s in playback mode.

Debating if I should return it for a new one.

Any suggestions/ideas, mates?
#53
Thanks! And it is a bit disappointing, since ExpressCard has a 1x PCIe lane, so it is fast enough for almost anything.

Though this one would require reinsertion, since it has an external part. Then it is not really better than USB3, indeed.

Mikhail
#54
Guys,

My laptop has an SD card slot but lacks a CF one, which is kind of a bummer, since it means I have to carry a USB reader around and plug it in. Has anyone seen a high-speed ExpressCard/54 CF reader that sits completely recessed in the expansion slot? Then it wouldn't be a problem during laptop transportation.

Thanks & Regards,
Mikhail
#55
Quote from: Audionut on May 30, 2013, 06:05:32 AM
From what I can gather, he's running the tests on full SSDs, and then again with 25% unpartitioned to simulate free space.

So, he's not getting performance from just having a partition at 75% of capacity.  He's getting the performance increase in those graphs because he is otherwise filling the SSDs to capacity while having some unpartitioned space there for wear leveling.

...and here is the catch - as you have heard, some cards drop their performance significantly, and it's not necessarily recoverable. TRIM is available on SSDs with advanced controllers; I am not sure about CF cards. He ran the tests for a fair amount of time/data read and written. Plus, frames get dropped closer to the moment when the card is full anyway. And garbage collection takes a pretty long time - again, on SSDs.

Overall it is food for thought, nothing solid, as we don't know for sure what's going on in the CF's brains.
#56
Quote from: rm312 on May 29, 2013, 10:12:07 PM
Just read through all of this and I'm very interested in where it's going. Would this help cut out a few of the components? Female CF - IDE extension

http://www.ebay.co.uk/itm/New-20-cm-CF-Female-to-IDE-Male-Adapter-Extension-Cable-/300572249445

What's strange is that the cable is apparently the same width as the IDE and CF connectors, and the 'CF' connector looks more like an IDE one.
#57
Quote from: Audionut on May 30, 2013, 01:36:34 AM
I was aware of the longevity of SSDs with smaller partitions due to wear leveling.  And I've also seen countless times the recommendation not to fill SSDs to capacity, as their performance drops dramatically when close to full.

But I can't recall the performance increasing from simply having the partition at 75% or so.  If that is what you were implying.

http://www.anandtech.com/show/6884/crucial-micron-m500-review-960gb-480gb-240gb-120gb/3

Click around the tables.
Cheers.
#59
Interesting thread, here's my 2 cents:

For simplicity, let's consider CF/SSD in a very straightforward way: a linear space of multi-kilobyte pages. Whenever any information has to be read or written, the whole page is read or written. The write operation is pretty costly, time-wise and wear-wise. This is why we observe:

1) When writing in small blocks, every block takes the same time to write as if it were page-sized. That's why write speed grows so fast with each doubling of the block size, until it saturates.
2) When the block to be written crosses two pages, both pages have to be rewritten; that's why the speed gains slow down before saturation, otherwise it would be 2x, 2x, 2x, 1x...

Thus, when we flush a really big buffer, we mitigate the unaligned first and last blocks (the only partly written pages). However, if the controller isn't smart enough, every write of an inner block that crosses a page boundary will result in a double write. It is debatable how it is actually handled at this point.

What we want is to BOTH align clusters to page boundaries AND set the cluster size to the page size. The page size is, in fact, pretty large, definitely bigger than what we are used to setting on HDDs/SSDs. North of 16 KB.

I don't know if they should be aligned to 0, but if we're talking about guesswork, the chances of finding the proper alignment with something like 1024 bytes are small.

This is why we see a performance spread over 3 parameters: cluster size, cluster bias, and buffer size - they all interplay.
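
To make the page model concrete, here is a minimal C sketch (PAGE_SIZE is a made-up value; we don't know the real page size of any given CF card) that counts how many pages a single write touches:

#include <stdio.h>

/* Hypothetical page size; the real value depends on the card. */
#define PAGE_SIZE (16 * 1024)

/* How many flash pages a write of 'len' bytes at 'offset' must rewrite. */
static unsigned pages_touched(unsigned long offset, unsigned long len)
{
    unsigned long first = offset / PAGE_SIZE;
    unsigned long last = (offset + len - 1) / PAGE_SIZE;
    return (unsigned)(last - first + 1);
}

int main(void)
{
    printf("%u\n", pages_touched(0, 16 * 1024));    /* aligned: 1 page */
    printf("%u\n", pages_touched(1024, 16 * 1024)); /* misaligned: 2 pages */
    return 0;
}

A page-sized write misaligned by even 1 KB touches two pages, i.e. double the work - exactly the cluster size/bias effect above.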

What would be even more interesting is if ML had a destructive write test that sweeps all three of these parameters, not just the last one.

My guess is that the write speed in a manufacturer's datasheet is basically (page size) / (write cycle required to rewrite it). Maybe a perfectly formatted and buffered CF can get us closer to those values?

Point #2: tests with SSDs partitioned to only 75% show higher performance, sometimes SIGNIFICANTLY higher. Making the partition smaller may resolve the speed issue both at the beginning of writing and when the CF is getting close to full (I have heard about this).
#60
Quote from: IliasG on May 28, 2013, 10:17:01 PM
I suppose you talk generally and assuming the available computation resources are enough, not like what ML has currently available ..

Just aligning to the topmost channel is in many cases ineffective as even a single specular highlight drives all channels to align at the White Level cutoff point.

1) Generally, yes, aligning to the right might not be possible on the embedded CPU, though it could be done cleverly.
2) Nonetheless, that's the best way in general.
#61
Raw Video / Re: 1920x1080 RAW - real resolution?
May 28, 2013, 08:34:57 PM
glnf is correct.

A Bayer pattern has about 80% effective resolution for B/W (like charts) and 50% for R/B; G is in the middle. This is what we see.

Full-frame video RAW looks a bit worse than 1:1 due to two facts:

a) it likely uses primitive binning;
b) the AA filter atop the sensor is too weak for 1920x1280 over 36x24mm, so it doesn't help with moire removal.
#62
Quote from: IliasG on May 28, 2013, 07:58:49 PM
...
The best way to use the 10-bit space is to calculate the topmost value for each channel of the RAW, record it elsewhere, and align all channels to 11111111111111b. Likely after applying a 14-bit LUT for highlights compression, then cleverly getting rid of the last 4 bits. In post, upon conversion to a normal DNG (which in fact might have all the information in the lower bits of the 14 after shifting the channels back and applying the inverse LUT!), the opposite operations should be performed.
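
Just to illustrate the quoted idea, here is a rough bit-shift approximation in C (all names are mine and hypothetical; a real implementation would scale rather than shift, and the per-channel shift has to be recorded so post can invert it):

#include <stdint.h>

/* Find how far a channel can be shifted left so that its maximum
   still fits in 14 bits (0x3FFF). Illustration only. */
static int align_shift(const uint16_t *ch, int n)
{
    uint16_t max = 0;
    for (int i = 0; i < n; i++)
        if (ch[i] > max) max = ch[i];

    int shift = 0;
    while (max && ((uint32_t)max << (shift + 1)) <= 0x3FFF)
        shift++;
    return shift;
}

/* Align the channel toward the top of the 14-bit range, then drop
   the lowest 4 bits to fit 10 bits. */
static void pack_channel(uint16_t *ch, int n, int shift)
{
    for (int i = 0; i < n; i++)
        ch[i] = (uint16_t)(((uint32_t)ch[i] << shift) >> 4);
}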
#63
Quote from: 1% on May 28, 2013, 12:44:51 AM
2 different processes. LV raw/photo raw
We don't know how to do that yet.

You probably saw my tests of video raw vs. photo raw. Binning definitely averages several pixels, giving noise closer to a resized 22MP raw. I personally don't consider 1:1 mode valuable for me: I want to use the full sensor and the full coverage of my glass; that's what makes the 5D3 stand out from everything else, including DR, according to my test (1-2 stops better noise in blacks than 1:1).

So, if by any chance the photo raw engine could be accessed to compress LV raw, it wouldn't slow things down significantly and would give us an outstanding ability to open these raws in DPP, where the color is so nice and the demosaicing works so well. Limitless abilities.

This would be more than worthy of a good donation :)
#64
You know, mates, what really puzzles me.

With the 5D3 we have 6fps at 22MP raw with lossless compression. 1920*1280 is 9 times smaller, so the same codec/routines should be able to pull up to (and, who knows, maybe beyond) 54fps, most likely 60fps if you cut some fancy features. With 70% compression.

Canon definitely could pull it off, if it cared.

Is there any way to access the RAW routines of the original hardware/firmware?

Also, is the code multi-threaded? Apparently all these slowdowns with the GUI etc. are due to serial processing?
#65
The image is misleading.

1) Nowadays the DR of digital sensors is enormous compared to what it used to be, and it outperforms many, if not most, film stocks (per unit area). Some B/W film might still be better, though its other properties (grain) are poor.
2) Both linear digital and enhanced digital should stretch from (0,0) to the same topmost-rightmost point; it is the allocation of levels per bit that makes them different (linear vs. curved), not the clipping (which is defined at the moment the sensor is read).

At this moment I would prioritize the ability to compress 14-bit RAW in some lossless/lossy way, and only then chopping some bits off and possibly applying a LUT. IMHO, of course.
#66
Quote from: pascalc on May 27, 2013, 10:28:56 AM
Inside the USB portable package the discs are SATA ones, so this would add an unneeded conversion: IDE -> USB -> SATA instead of IDE -> SATA. I don't know if it slows down the data transfer, but it surely consumes more power.
True that. But I consider workable portability (something like the Seagate Wireless Plus with its built-in rechargeable battery) a much, much appreciated feature. I am not a pro with a fully grown rig; I need something easy to carry and easy to use. Those adapters shouldn't consume too much, I think, and would be powered by a USB port. If it ever works, of course.
#67
Okay, back to rounding.

Let's simplify it for understanding - we'll work it out in the decimal system first.

We have 1001 and 1005 and want to shift them to the right by 1 digit, i.e. divide by 10. The exact results of the division are 100.1 and 100.5, whose nearest integers after NORMAL mathematical rounding are 100 and 101. However, if you operate in integer space you will get 100 and... 100, since the CPU rounds DOWN to the next integer. To avoid this we add 5 (half of the base raised to the power of the shift) before dividing, which restores normal rounding: 1001+5=1006, shifted right one digit = 100; 1005+5=1010, shifted = 101. Now, if you wish, you can shift them back. I am not sure how you want to align the final result (left or right).

To shift by 2 bits (divide by 4), you need to add 2^2 / 2 = 2 before you shift, to preserve correct mathematical rounding. Then, if you need to re-align the most significant digit to the left, you can shift it back.
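
In C, the same bias-then-shift rounding looks like this (a minimal sketch; the loop reproduces the 2 => 1 ... 8 => 2 mapping from my reply further down the page):

#include <stdio.h>

/* Add half the divisor before the shift so that truncation
   becomes round-to-nearest (halves round up). */
static int shift_round(int x, int bits)
{
    return (x + (1 << (bits - 1))) >> bits;
}

int main(void)
{
    for (int x = 2; x <= 8; x++)
        printf("%d -> %d\n", x, shift_round(x, 2)); /* 2->1 ... 8->2 */
    return 0;
}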

I hope it is clear now. But a look-up table might indeed be a better alternative, unless referencing memory via a pointer is expensive; I am not familiar with the efficiency of the DIGIC 5+.

BTW, why was it so important to round so precisely? Perhaps for 10 bits?
#68
Raw Video / RAW video vs. RAW photo noise
May 27, 2013, 10:13:54 AM
I have performed a number of tests, one of them being (video raw vs. photo raw) X (FF vs. 1:1 crop) X (ISO 400 vs. ISO 6400):

http://kartashovs.com/UpLoads/5D3_RAW_VIDEO/

The aforementioned pics start with CINE (from video raw, ACR) and PHOTO (from photo raw, DPP).

All video pics were sharpened the same way, and all photo pics were sharpened the same way. No denoising was performed. Basic WB and de-fringing were applied to all pics.

Just as the FF video raw is binned, the FF photo raw was downsized 3x.

I used the 24-70 2.8 I, at 70mm f/4 for FF and 24mm f/4 for the 1:1 crop. Obviously, that's not exactly a 3x difference, so I didn't get completely identical images. The chart readings in lp/ph are correct for FF at 70mm, shot at 1080p framing. I had to crop the pics due to weird windowing in the 1:1 modes.

Results: software handling is vastly different. DPP wins hands down. Software supersampling is superb, much better than binning.

The GOOD: at FF readout, binning actually averages a number of pixels => noise (and thus dynamic range) improves compared to the 1:1 crop, even if it isn't completely on par with downsampled photo raw. I.e., with 11-12 stops of dynamic range in 1:1, FF should give us 12+, though not as sharp.

Enjoy.

P.S. There are other pics of charts, the 85L, and video with the same lens.
#69
Hello team,

What about CF to USB3 conversion instead of CF to SATA? Portable drives with USB3 have their own power output, USB3 is fast enough (theoretically), and there are fully portable packages available.

I know, it might work in the wrong direction, but... something like this:

http://www.rakuten.com/prod/5gbps-usb-3-0-2-0-to-hard-dirve-hard-disk-drive-sata-ide-adapter/230514238.html?listingId=213389903

or this:

http://www.walmart.com/ip/19793234?wmlspartner=wlpa&adid=22222222227000000000&wl0=&wl1=g&wl2=&wl3=21486607510&wl4=&wl5=pla&veh=sem

Cheers
#70
Quote from: mucher on May 25, 2013, 11:51:47 AM
What I worried about was the accuracy problem: using int might not be accurate enough, and if the data were too small, dividing by 2^14 might make it unusable at all, so I changed it to floating point. And I worried that float might not be accurate enough either, so I wanted it to use double instead. Modern CPUs might be fast enough to handle double, my wild guess. If not, you can still change it to use float, which might be accurate enough already. But if I get the full source code, I will definitely compile it my way before loading it to my camera, including meddling a bit with that raw.c too.

BTW, ARM9's 16 x 32-bit MAD unit is floating point, to my understanding.

int x; // 14-bit number, stored in a 16/32-bit integer
x = (x + 2) / 4; // add half the divisor first, so truncation behaves like normal rounding, not the CPU's round-down

// x is 12 bits now

// examples: 2 => 1; 3 => 1; 4 => 1; 5 => 1; 6 => 2; 7 => 2; 8 => 2

A double is 64 bits and is by now totally fine on a modern CPU. But for raw handling in our Canons we need integer performance more, to pack all this raw data into something more space-efficient. Floats, though, are nice to have at times too.
#71
Why don't you just divide by 4? That is equivalent to a bit shift right by 2 (>> 2).

Bit shifting was handy back in the 80286 era, when x*320 was computed as (x << 8) + (x << 6). But since then mul and div have become very cheap.
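
For illustration, that shift-add trick in C (purely a historical curiosity; any modern compiler performs this transformation itself):

/* Multiplication by 320 via shift-adds: 320*x = 256*x + 64*x. */
static unsigned mul320(unsigned x)
{
    return (x << 8) + (x << 6);
}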

Granted, if your integer isn't right-aligned (to the least significant bits), you have to bit shift further to the right and then back to the left. Masking would do the job too, but would require either a register or a memory op.

BTW, according to the spec, ARM9 has 16 32-bit MAD units; it should be more than capable of basic integer math, no?

EDIT:

I see, you are probably trying to get better rounding. For that purpose you can ADD/SUB something prior to dividing. But dividing and then multiplying a double won't give you back the exact value; it is called a floating-point number for a reason. There is a set of specialty rounding operations for doubles. However, I doubt double operations are fast on such hardware.
#72
Hi,

Not sure if I should post it here, but... would you please add 800/400 to the vertical resolutions and 1728 to the horizontal ones? 1920x800, 960x400 and 1728x720 all have a perfect 2.40:1 ratio. Besides, all of them are perfect multiples of 16.

Your work is much appreciated!
#73
Hello team,

1) It would be super to keep the naming of video RAW (and companion WAV) files following the camera's convention (insanely great if it included the customized name) and to bump the file counter, so it would completely mimic the jpg/cr2/mov file handling of the original firmware.

2) How realistic is it to get an automatically correct color profile during DNG conversion? I have a 5D3, and the color from ACR/DNG is pretty far from what I get from a normal CR2 (I prefer DPP). I understand I can touch it up, but every little bit of work taken care of once will keep saving us time later.

3) Any chance FF raw will get the sharpness of crop raw, perhaps somewhere in the future?

Thanks a lot for your great work!