Messages - deleted.account

#1
Would using a simple batch script combined with ffmpeg or tsMuxeR or similar to multiplex the camera audio with the video stream help?
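
Something like this rough sketch is the kind of thing I mean; it assumes each WAV shares a base name and start point with its MOV, and the names are purely illustrative:

Code:
#!/bin/bash
# Mux each camera WAV into its matching MOV without re-encoding either stream.
for v in *.mov; do
  a="${v%.mov}.wav"
  ffmpeg -i "$v" -i "$a" -map 0:v -map 1:a -c copy "${v%.mov}_muxed.mov"
done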

What's the benefit of recording separate in-camera audio, rather than demuxing it later should you need to?
#2
Quote from: Thrash632
I've tested exporting the same DNG sequence to ProRes 422 HQ and 4444.

Quote from: Thrash632 on September 19, 2013, 07:37:51 PM
I didn't know to google chroma subsampling because I don't know what it is. That's why I'm asking here. Now that you brought it to my attention I will do some research on it.

So you want to export to formats you don't understand. A quick net search on ProRes, or reading the documentation, would have explained the difference between 4:2:2 & 4:4:4, and therefore what chroma subsampling is, without someone having to write paragraphs explaining it.

Quote
I'm on a MacBook Pro, circa summer 2010. So these monitors are only 8bit? What's the point of having a 16bit and 32bit workspace in After Effects if you can't view higher than 8bit?

When I grade the original DNGs using ACR and bring up the saturation, the colors don't appear to blow out. So I'm confused as to how my monitor would be the problem.

So you're kind of doing it again: why do you expect anyone to explain workspace bit depth and why processing is done at higher bit depth? Have you read the Adobe AE or Resolve documentation on workspace bit depth? I've said enough for you to really look for yourself.

Quote from: maxotics on September 19, 2013, 09:23:26 PM
It's just as nauseating to listen to people on this forum tell others to "look it up on the Internet".  If you don't have something nice to say, why say it?  Either answer the question, or don't.

I'd rather say little than talk a lot and say very little ;-). Point people in the right direction and they fill in the rest. But I don't see why I should have to defend myself to you.

Quote from: maxotics
What's upsetting to me Y3llow, is you do seem to have this knowledge and I found what you wrote VERY interesting. I feel I've learned a bit more about this subject; however, not all of it. That's the way with reading this stuff on the www. I read it. I still don't get how it applies to Magic Lantern RAW.

For example, I want to convert TIFFs from RAW to a clip using ffmpeg.  I've searched everything I can find and I still don't know what CODEC or settings would give enough, but not too much, quality.

I like Cineform 422.  But I don't know what would be the closest in ProRes or H.264.  Again, I read, but I just don't know it as well as you.

The thread is about ProRes, the differences in chroma subsampling, and how to export from Adobe products, not your problems.

Quote
Finally, if you're going to direct someone to the www, be specific. Please point me to the www page where I can get my questions answered and not annoy you too ;)

Personally I'll put in the same amount of effort as the OP and no more.

Quote
Hope you don't feel I'm attacking you. What I'm really trying to say is I appreciate when someone like you responds. No need to remind people like Thrash and me that we're dolts :)

You were attacking me, and adding the self-deprecating remark about dolts doesn't change that. The thing is, for those asking dumb-ass questions: appreciate that people go to great effort to produce effective search engines, product documentation, free training material, wikis etc. They do that to help, so why not use it?

Quote from: Thrash632 on September 19, 2013, 11:19:15 PM
Okay, so I just figured something out:
Could it have to do with a change in color space or something? I'm going to do some more research.

Monitor bit depth and gamut. Is there a reason you want to push the saturation etc. beyond the norm?
#3
Rather than get someone to explain chroma subsampling to you, perhaps first search the term on the www. It's explained so many times it becomes nauseating. :-)

ProRes 4444 is completely and utterly pointless to convert to from camera raw: its 12bit 4:4:4 encoding with optional alpha channel buys you nothing, as there is no alpha channel in the source, it's fricking easy to create one further down the chain, and skipping it perhaps saves disk space.
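
If converting anyway, ffmpeg's prores_ks encoder makes the choice explicit. A sketch, assuming a TIFF sequence at 24fps (both illustrative):

Code:
# ProRes 422 HQ is profile 3; profile 4 would be 4444, only worth it
# if you actually have alpha or 4:4:4 sources to preserve.
ffmpeg -framerate 24 -i frame_%06d.tif \
  -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le out_422hq.mov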

Blow out? Do they really blow out, or just appear to on your 8bit display referred monitor via the rec709 / sRGB gamma curve? In reality, in a 32bit float workspace they don't blow out but still appear to; in an 8 or 16bit workspace they would clip; and in all cases they appear to blow out because you are driving values beyond what a display can show.

But chroma subsampling has nothing to do with blow outs. :-)
#4
Thinly disguised Nikon versus Canon BS about how Nikon will kick Canon's ass now that raw video has been discovered; meh, old news.

So here's my Nikon vs Canon trolling BS. Cinematographers and Nikon in the same sentence, really? As an aside, the term DP gets so overused as well, equating to 'the guy whose camera it is'. :-)

Registration distance has screwed Nikon for a long time and continues to do so. Since Nikon have no video camera product line other than DSLRs, why don't Nikon provide camera raw video in a forthcoming product line?
#5
Depends what you're feeding in and compressing. The old adage: garbage in, garbage out. You need to test and decide for yourself if it's worth doing on your 60D.

It's one of those topics discussed repeatedly, and a search of the web should give you plenty to read. :-) You could also search the web regarding compression to get the fundamentals.
#6
Quote from: Rewind on September 10, 2013, 09:59:52 AM
Focusing pixels exist in red and blue channels. The green channel wisely stays untouched. So none of them are actually pink; it is interpolation that makes them look pink.

Interpolation? OK, so the pink is not from dcraw's RGB multiplier scaling for white balance based on the clipped channel? Like if dcraw -H 1 is used for blown highlight handling and a channel is clipped, then highlights discolor? Do pink dots appear with -H 0 and the sensor saturation level having been set correctly for the particular camera (dcraw's -S CLI option)?
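
Something like this would be my test, with the -S value purely illustrative for one particular camera:

Code:
dcraw -v -T -H 0 -S 13584 frame_000001.dng   # clip at the given saturation level
dcraw -v -T -H 1 frame_000001.dng            # leave highlights unclipped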

Quote
Overexposed in this case means clipped.

So full well? Clipped before demosaicing and WB multipliers?

Quote
This 'crap' happens when you shoot something like the sun directly with dual ISO. And I'm not sure these artifacts have the same nature as focusing pixels.

Is this different crap to the focus dots you mean here? Not being funny, I just haven't read up on the pink dot thing.
#7
Quote from: Rewind on September 09, 2013, 08:29:16 AM
Since this crap happens in overexposed areas, it is possible to use any simplified interpolation algorithm.
By the way, ACR does it for you. This problem shows up only in dcraw.

Is the crap the focus dots being pink, or the focus dots whatever their color?

When you say 'overexposed', what do you mean? Clipped channel before white balance, or blown pixels after white balance?

Does changing or setting the sensor saturation level for the camera, or the highlight recovery mode, make any difference?
#8
Post-processing Workflow / Re: Upscaling Technique
September 09, 2013, 08:40:19 AM
Avisynth, Vapoursynth or Imagemagick. Any camera raw app that works in Lab colour space.
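
For instance, a minimal ImageMagick sketch through Lab; the 200% size and Lanczos filter are just illustrative choices:

Code:
convert input.tif -colorspace Lab -filter Lanczos -resize 200% \
        -colorspace sRGB output.tif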
#9
Haven't read the thread and this may have been mentioned, but if you're investigating removing the dots before demosaicing, or as part of a demosaicing algo, why not contact the dcraw dev David Coffin, the LibRaw developers at www.libraw.org, or the DCB or AMaZE devs? It's all open source code; see if they can help.

You're more likely to get a solution there than via commercial software, and DCB or AMaZE in dcraw, or a dcraw variant as in LibRaw, is no bad thing. Maybe Elle at Nine Below Zero Photography, with her dcraw variant, might be another dev route.
#10
Tutorials and Creative Uses / Re: HDR
September 03, 2013, 11:58:59 PM
No problem. :-)
#11
Tutorials and Creative Uses / Re: HDR
September 03, 2013, 11:49:31 PM
What are you clicking on, the Interframe script? The "user friendly version" or "bare bones"? Are you on a Mac? If using the "user friendly version", try doing RMB > Save Link As (or whatever it's called in your browser), save the .zip file to your local drive, and go from there.
#12
Crash with Buffer Overflow on Linux Ubuntu 13.04 64bit:

Processing Frames...

1/469
*** buffer overflow detected ***: ./unknown terminated
======= Backtrace: =========
/lib/i386-linux-gnu/libc.so.6(__fortify_fail+0x63)[0xf73ddbc3]
/lib/i386-linux-gnu/libc.so.6(+0x10593a)[0xf73dc93a].............. etc etc etc
f7755000-f775e000 r-xp 00000000 08:01 17957736                           /usr/lib/i386-linux-gnu/libXcursor.so.1.0.2Aborted
#13
Raw Video / Re: Working space in After Effects
August 28, 2013, 08:48:46 AM
Hi, I'm new to raw video. With compressed video, such as h264 out of a Canon for example, I'm really happy with its appearance using Vision Color or Cinelook non-grading picture styles; I like the combination with my old manual lenses and the idea of straight out of the camera, so I do very little grading, plus I have no time to devote to it. :-)

However, I have used AE to Resolve for DPX, simply because QT decompression in Resolve constantly crashed with the files I was feeding it, so I just gave up on it.

I think though with raw we really want as few steps, as little manual intervention and as little intermediate storage cost as possible to get from raw to a 'usable' format, so I'd not use AE for the raw to intermediate stage at all.

Instead, if going to Resolve (due to its currently 'bad' Canon raw handling), I'd go from raw to a flat output in a flavor of 10bit 4:4:4, or if really need be 16bit TIFF (if Resolve can handle it) or 10bit log DPX, using a batch script, dcraw and ImageMagick. Like via tin2tin's EyeFrame.

Just using dcraw to get a flat linear RGB output that bakes the WB but has no camera curve applied, rather than flattening via ACR to combat the camera curve I assume ACR adds before export; but I don't know, I never use it. :-)
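
As a rough sketch of that idea, assuming a folder of DNGs (the second stage is optional and illustrative):

Code:
#!/bin/bash
# Flat linear 16bit TIFFs out of dcraw: -w bakes the camera WB,
# -4 gives 16bit linear output with no gamma / camera curve, -T writes TIFF.
for f in *.dng; do
  dcraw -v -w -4 -T "$f"
done
# Optional second stage, e.g. letting ImageMagick batch-convert further:
# mogrify -format png -depth 16 *.tiff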
#14
Raw Video / Re: Working space in After Effects
August 26, 2013, 03:04:37 PM
Quote from: andyshon on August 26, 2013, 12:49:58 PM
Oops, big plate of humble pie for me. This is not working how I thought it was, how I was told it would, and I'd not checked it properly, divvy that I am. Thanks for the warning, very timely as it happens. I usually use essentially this workflow but exporting to DNxHD, which did seem to work. I could pull real data back into highlights or shadows in Resolve.

Not to stray into the minefield of encoding and decompressing video and 'limited' vs 'full range' handling by applications, but if you used a rec709 (16-235) workspace in AE to DNxHD then there'd be no supers. If you used the non-limited rec709 space you'd very possibly get a DNxHD encode with full range levels and therefore supers. But you can also have a situation where a receiving codec like QT may scale levels, say 16-235 to 0-255, giving you a brighter, washed-out appearance, lifting shadows and pulling down highlights; or a codec crushing the dark supers to RGB 0 (black), compressing the highlight supers to RGB 255 (white), and stretching the levels out in between, possibly giving the appearance of more latitude to play with.
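
The scaling itself is trivial arithmetic; a back-of-envelope check of what such a codec or player would be doing, assuming the straightforward 219/255 mapping of full range 0-255 into limited 16-235:

Code:
awk 'BEGIN { for (v = 0; v <= 255; v += 51)
  printf "full %3d -> limited %5.1f\n", v, 16 + v * 219 / 255 }'
# and the reverse stretch, which lifts/crushes the same way:
# rgb = (y - 16) * 255 / 219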

Quote
The DPXs definitely don't work like this though. Is there a way to get a proper broadcast levels signal into a DPX from AE, or am I flogging a dead horse?

I'd have thought your rec709 (16-235) work space dictated proper broadcast equivalent levels in the DPX output. Are you not seeing that, then? The part I questioned previously was outputting limited range into full range DPX; it seemed like levels scaling might occur.

However, is rec709 (16-235) really what you want to develop your raws into? Yes, it's your final output space, but your intermediate is DPX and you're outputting through Resolve.

Have you tried a wider working space in AE, exporting as full range DPX and importing into Resolve as full range, monitoring in Resolve via a rec709 display LUT / calibrated monitor, and encoding to rec709 video formats from there? You'll be monitoring / previewing as rec709 (16-235), so things may appear to be clipping and crushing in preview, although your scopes should tell you otherwise: the 32bit work space should ensure no clipping while you grade and make your choices on how you compress your DR into rec709 for the final encode.

Quote
Canon raw files can contain up to 11 stops, and with some cross channel highlight reconstruction you may get a little more.

Well, I was being conservative when I said hold more than 9 'stops'. Linear encoded 16bit TIFF output will theoretically hold 16 stops, with decent gradation, which is what really matters rather than theoretical 'stops'; 10bit log I'd assume is much the same. But it's subjective really: first, how many stops are really usable re: noise (shooting ISO will affect that); then, with regard to mixing from other channels, that's a bit subjective too, depending on the scene DR, exposure choices and the color of the light source(s), which channel clips first and how quickly, followed by the increased noise from the multiplied weaker channels to get the white balance 'accurate'.

Quote
In my experience a default conversion via ACR to a standard RGB space will clip some of this. And likewise if you grade for output in ACR you are likely to clip some data. But then I much prefer to do this rather than do a low contrast conversion at this point. This makes sense for us as our stuff often goes straight off for stock from AE.

There's no such thing as a 'standard' RGB space :-) and it can be unbounded, creating values massively bigger and smaller than the 0.0 - 1.0 display space without clipping; take ACES for example, though Canon raw is neither HDR nor ACES. :-) Going back to the 32bit work space: we preview in display space 0.0 - 1.0, so we see clipping 'by eye', visually, because our displays can't handle the values, but the scopes will show no clipping. So it can be misleading to think data is getting clipped when in reality you can store a wide dynamic range in an intermediate file format and in a 32bit workspace, but you have to make choices on what to display in a limited DR output like rec709 (16-235), generally at 8bit on 6 or 8bit monitors with typically poor calibration :-)
#15
Never used it but you could try removing spaces from your folder and file paths, maybe use an underscore instead to signify a space. Spaces in paths can trip up apps working on the CLI.
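
For example, in a bash shell, something like:

Code:
# Replace spaces with underscores in file names in the current folder.
for f in *\ *; do mv "$f" "${f// /_}"; done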
#16
Raw Video / Re: Working space in After Effects
August 25, 2013, 05:27:58 PM
Quote from: andyshon on August 25, 2013, 04:16:38 PM
If I use the full range space then whites/blacks are clipped in the DPX.

Yes, agreed, because 16-235 luma (16-240 chroma) is what maps to 0-255 RGB; full range 0-255 YCbCr doesn't fit unless converted to RGB using a slightly different calculation, or a format like OpenEXR is used and a 32bit work space chosen so you can have levels below 0.0 and highlights greater than 1.0. But then apps like Resolve can't handle EXR, though AE can. DPX is an odd choice though, unless the receiving app requires it. For AE, 16bit half float EXR is a better choice if you're talking about holding onto supers.

Quote
If I use 16-235 then super whites are retained, if they were present in the raw file.

No they aren't, because supers don't exist in 16-235; they live in 0-15 & 236-255 YCbCr. :-) There are no supers coming from raw unless you specifically scale them that way into 0-255 YCbCr; if you then try to set RGB black at 16 YCC and RGB white (255) at 235 YCC, you'll get supers, but at that point you've scaled luma and chroma back and forth. :-)

Raw camera space to RGB is not HDR, and a typical 16bit integer pixel format or 10bit log is more than sufficient to hold the 9-plus stops of camera raw, whereas your rec709 limited range by specification is typically 5 stops. So it appears you're restricting your 'latitude' by normalizing it into a limited range work space, then later dropping it into a full range Cineon space to offer a chance for wiggle room. Seems like AE's DPX options are the problem; Resolve takes full range DPX, but AE doesn't offer it in its DPX?

Quote
And I don't see where we're going back and forward? We go from the sensor space straight to rec709 16-235, and stay there. Setting Cineon to full range means it passes the working space unchanged. So our DPX has rec709 colours and gamma with black at 64 and white at 960.

Well, therein lies the confusion and why I mentioned scaling of levels. You say you put rec709 into full range Cineon, ie: 0-1023, and I said that IF those levels are scaled from 16-235 (ie: 64-960 in 10bit) to full range 0-1023 then that's not best practice. But you conclude that your levels don't scale; fair enough, I did say IF and possibly. :-)
#17
Raw Video / Re: Working space in After Effects
August 25, 2013, 12:40:31 PM
If it works for you then that is all that matters, although I think you may possibly be stretching your levels back and forth between full and broadcast range, and that is also implied by your mention of super blacks and whites. There won't be any in raw as it's not HDR, and a rec709 limited range workspace will restrict the possibility too; a full range rec709 workspace would allow supers. Going to full range Cineon, if levels scaling also occurs, then you've stretched the levels out.

But you say it works for you, so that's good.

#18
Raw Video / Re: Working space in After Effects
August 25, 2013, 09:39:56 AM
I see now, thanks. The 16bit profile is ARGB, not 4:4:4:4 at all.
#19
Raw Video / Re: Working space in After Effects
August 25, 2013, 01:09:51 AM
I've never seen, heard or read of 16bit ProRes; where have you found that?
#20
Raw Video / Re: Working space in After Effects
August 24, 2013, 07:53:21 PM
Quote from: ilia on August 24, 2013, 07:39:57 PM
from Apple white papers:
ProRes 4444:
The Apple ProRes 4444 codec preserves motion image sequences originating in either 4:4:4 RGB or 4:4:4 Y'CbCr color spaces. With a remarkably low data rate (as compared to uncompressed 4:4:4 HD), Apple ProRes 4444 supports 12-bit pixel depth with an optional, mathematically lossless alpha channel for true 4:4:4:4 support. Apple ProRes 4444 preserves visual quality at the same high level as Apple ProRes 422 (HQ), but for 4:4:4 image sources, which can carry the highest possible color detail.

I see, so the 12bit pixel depth is for the imagery, with the alpha channel as an optional extra. But as the raw has no alpha, encoding to 12bit 4:4:4:4 seems a little pointless.
#21
Raw Video / Re: Working space in After Effects
August 24, 2013, 07:49:34 PM
Very little. :-)

sRGB is derived from the rec709 specification and is described as 'display referred', that is, the output is targeted at sRGB monitors and mobile devices, for example. http://en.wikipedia.org/wiki/SRGB

rec709 is described as 'scene referred', that is, it's not targeted at a specific device, but for all intents it really is, an HDTV for example. http://en.wikipedia.org/wiki/Rec._709

When we typically work on and view images & graphics, it's generally assumed that to view 'correctly' they will be sRGB. When we encode and view videos, even if from sRGB imagery, they are 'expected' to be to the rec709 specification. If we view rec709 video on an sRGB monitor, it's expected that the rec709 to sRGB adjustment is done for us.

As an aside, there are other variants that use rec709 primaries but offer a wider color gamut, such as xvYCC (xvColor) http://en.wikipedia.org/wiki/XvYCC and scRGB https://en.wikipedia.org/wiki/ScRGB, just for completeness.
#22
But this uses the DNG SDK for the very reason that dcraw has problems with the 600D & 650D, as the OP says?

But yes, dcraw -4 -T exports 16bit linear TIFF.
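
For example, with -w added to bake the camera white balance in as well (file name illustrative):

Code:
dcraw -w -4 -T IMG_0001.dng   # writes IMG_0001.tiff, 16bit linear RGB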
#23
Raw Video / Re: Working space in After Effects
August 24, 2013, 05:34:08 PM
Does ProRes support 12bit? Sorry, I only know of 8 and 10bit. But yes, the higher bit depth intermediate gives you better grading advantages, ie: better gradations; but when you either encode to 8bit or play back 10 or 12bit intermediates, the levels get scaled into the lower bit depth and the result dithered to suit your display, which in most cases is either 6 or 8bit.
#24
Raw Video / Re: Working space in After Effects
August 24, 2013, 05:01:58 PM
Hi, sRGB on the iMac will be the display characteristics: basically a slightly different gamma curve at the base, affecting the shadows area, compared to rec709. sRGB and rec709 share the same color primaries that define the 'width' of the gamut in the wider scheme of things.

Working space is to do with exactly that: the space, in terms of gamut, defined by the choice of color space, within which you manipulate your color values; wide enough to generally avoid clipping the gamut unnecessarily, which can lead to such things as potential anomalies in final image 'quality'.

A wider gamut working space is more of a possibility when working with source files of greater bit depth than 8bit. But the decision whether to bother also depends on final delivery, ie: rec709 video.

The principle of color management in apps like AE is that you define, or in AE terms 'interpret as', your source files' color space, ie: rec709 for video, sRGB for images, or camera raw spaces. You then set a working space gamut as wide as, or a little wider than, your input sources; or, in the case of raw, where the color space is undefined and you're at greater than 8bit, you choose a work space based on the widest gamut output you intend going to: in your case, rec709 in DNxHD.

Then, depending on the width of the gamut of your work space, you preview either as rec709, if viewing through an HDTV for example, or as sRGB on sRGB monitors, depending on your display device (disregarding projectors), and AE will do the necessary transform and dither from the wider working space gamut and bit depth down to display space. So you see what you'll get in your 8 or 10bit video encode, or close to it, depending on a host of other factors concerning playback, codecs and the calibration minefield.

Regarding bit depth: rec709 doesn't mean 8bit, the two are not related; rec709-defined video can be 8bit, 10bit or 16bit in YCbCr.

If you encode 14bit raw into 16bit then you have a 16bit levels range; whether that's necessary, or worthwhile considering the storage cost given the source file's bit depth, is another thing. 14bit raw precision into linear RGB, minus the noise floor, setting the black level around 4000 to 5000 and the sensor saturation of the first channel with the other channels scaled accordingly (ie: RGB multipliers to get your WB right), will undoubtedly mean not even 14 bits' worth of levels. The black level (5000) will be set to 0 in 16bit, and the sensor saturated channel, say approx 13584 for a 550D, scaled to 65535 in 16bit.
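
Quick arithmetic with those illustrative numbers:

Code:
# black ~5000, saturation ~13584 (the 550D example above): only
# sat - black = 8584 usable input levels, well under 14 bits' worth,
# stretched across the 16bit range.
awk 'BEGIN { black = 5000; sat = 13584
  for (raw = black; raw <= sat; raw += (sat - black) / 2)
    printf "raw %5d -> 16bit %5.0f\n", raw, (raw - black) * 65535 / (sat - black) }'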

8bit from raw is not really enough for intermediate storage, ie: DNxHD, ProRes etc; 10bit log probably would be; 16bit linear or gamma encoded is more than enough.
#25
Buffer overflows on every version to date on Linux, Ubuntu 64bit 13.04.

The app fires up OK; the terminal shows the usual raw2dng info about frame size, frame rate, number of frames etc, and everything works in the GUI, like browsing to the output folder etc, until I hit the PROCESS button; then it crashes with either DNG or TIFF output chosen.