Messages - deleted.account

#26
Quote from: glubber on August 24, 2013, 08:29:41 AM
Just a little hint if you want to use DCRAW.
The raw converter Rawanizer has a built-in GUI for dcraw.

Yep and...

Quote: Only works on Windows

Yet it...

Quote: is using raw2dng, dcraw and ffmpeg.

Whereas this handful-of-lines script, like many scripts on the forums here, is portable, needs no components or libraries installed, uses the same three apps (raw2dng, dcraw and ffmpeg), and can be tweaked and extended without a whole development / compiling / recompiling cycle and setup, or some nonsense proprietary OS-specific development environment & libraries for a freaking batch converter. And the script is pretty universal apart from syntax:

Quote:
for file in *.RAW ; do
   mkdir -p "./Out/$file/yuv16"
   wine ./raw2dng.exe "$file" "./Out/$file/"
   # dcraw -c writes each DNG to stdout as PPM, so one pipe feeds ffmpeg the whole frame sequence
   dcraw -c -a -H 1 -o 1 -q 3 "./Out/$file/"*.dng | ffmpeg -f image2pipe -vcodec ppm -r 24000/1001 -s 1152x482 -i - -f mov -vcodec prores -profile:v 2 -pix_fmt yuv422p10le -r 24000/1001 "./Out/$file/yuv16/$file.mov"
done

Of course it's not as pretty, has no GUI and may cause Mac users a bit of trouble :-) jk, and no disrespect to RAWanizer or its dev, who is providing a free tool that works for many, built in an application environment he/she knows well.
#27
Quote from: wwjd on August 23, 2013, 08:08:23 PM
so... no disrespect intended... and I am new and don't know how things flow here... is it common for others to post their videos in the thread when you "Share your videos" as the forum is named?  just seemed odd... like they should make their own thread to share?

serious question, not trying to be a jerk or anything

Hey, I'll be that jerk. :-) No, they should start their own thread. It's freaking annoying, especially on mobile devices and lower-bandwidth connections, to have threads full of links to videos, especially when someone quotes a post and embeds the video over and over down the thread like everyone's forgotten what the OP's video was a few posts up and needs reminding. Bl**dy noobs. :-)
#28
No, it's a two-step: the raw2dng app, then dcraw does DNG to TIFF, but if done in a simple script you can delete the DNGs as the TIFFs are created, as in the sketch below.
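
Something like this minimal sketch (paths illustrative):

for d in ./Out/*.dng ; do
   dcraw -T -4 "$d" && rm "$d"   # write the 16-bit TIFF, then drop the DNG it came from
done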

So yes, the app does raw-to-TIFF, great, but at what cost to control? It crashes for me on Linux, but it's worth testing output from dcraw versus the app.

If going to TIFF or EXR or ProRes etc. I'd rather a two-step with control over WB, sensor saturation point, channel clipping handling, demosaic algo, gamma or linear, black point, whether or not to normalize-stretch 14-bit levels into 16-bit, and highlight recovery strategy, and possibly omit dcraw and go to TIFF via ufraw with a log-ish flat preset applied.
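
To give a sense of the control I mean, a dcraw call of roughly this shape (flags from dcraw's man page; the numeric values are placeholders, not a recipe):

dcraw -T -4 -o 0 -q 3 -H 2 -k 2048 -S 15000 -r 2.0 1.0 1.5 1.0 frame.dng
# -T TIFF out; -4 linear 16-bit, no gamma, no auto-brighten; -o 0 keep camera / open gamut
# -q 3 AHD demosaic; -H 2 blend clipped highlights
# -k black point; -S sensor saturation point; -r per-channel WB multipliers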

http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm

http://www.libraw.org/docs/Samples-LibRaw-eng.html

http://ninedegreesbelow.com/photography/dcraw-float-command-line-options.html
#29
The portable app is over the top for raw-to-TIFF; dcraw will do it, and with a lot more control. That app is really there to cater for the 600Ds & 650Ds that dcraw supposedly doesn't like, or if you want demosaiced raws that retain the ability to adjust WB etc, ie: linear DNG, for reasons such as Resolve's crap demosaic of Canon raw.

Don't know what that app puts out in the way of TIFF, but dcraw will give you 16-bit open-gamut linear RGB TIFFs using any of a number of demosaicing algos, or various color spaces, gamma encoded if you wish.

libraw is another very useful raw library, one that emulates dcraw. dcraw-float is another. Or ufraw on the CLI, all depending on what you want to bake into the TIFFs.

A simple 5-line batch script will process a folder full of raws and give you TIFFs plus proxy DNxHD or ProRes etc via ffmpeg.

Don't get me wrong though, the portable app, like all the other DNG-output batch converters, has a use if you want to stay 'raw', or 'sorta raw' in the case of Force 'Linear', but for TIFFs better options exist.
#30
hi 3point, I believe 5x zoom is a 1:1 crop out of non-line-skipped source, and non-5x is a crop out of 1920x1080 line-skipped source, but line-skipped is still generally more detailed, sharper and uncompressed compared to h264. At 5x zoom you'll need to be rock steady.

As for false color and all that, so much is down to interpretation of the raw source via whatever raw development tool is used that it's not necessarily in the camera raw source, but down to a crap raw dev process.
#31
Quote from: aaphotog on August 20, 2013, 09:56:31 AM
Yes, for those who made copies, they could!
But many, don't make copies.

Duplication to 3 locations is a typical and well-adopted strategy, particularly for commercial work; duplicating files and working with a copy is always the first op.

Quote: that's a copy of THREE separate steps of the same files. Very large files at that.

Comes with the territory. Bit of a shock to those coming from a photography background maybe, not only the impact on storage but also the cost of even basic video and audio related hardware.

But you could compare storage usage between a Canon raw video project, where recording times can be shorter due to card controller restrictions and trying to be efficient in what is captured, and a typical compressed video project, where the camera can be left to run on, or multiple takes of the same scene shot, resulting in what can be hours more video to trawl through.

Quote: I make copies of the DNGs, but don't make copies of the .raw container file.

FWIW I do the opposite, storing the raw and dumping the DNGs. The raws take less space than the extracted DNGs, but more importantly, if/when raw2dng improves then the DNGs can be extracted again if it's worth it or needed.

But whatever, who cares, it's not helping the OP's situation.
#32
Raw Video / Re: Working space in After Effects
August 20, 2013, 04:17:46 PM
It doesn't really matter for exporting to DNxHD, but any upsampling, uprezzing, sharpening or blending would technically benefit: raws are linear.
#33
Can you not just copy the files back into your project folder on your machine, from your RAID backup or the external hard drive where you copied your media cards' contents before starting work, and use a different raw-to-DNG app?
#34
Raw Video / Re: Working space in After Effects
August 19, 2013, 07:18:03 PM
For me, if going to Resolve to grade, it would be rec709 with a 'flat' or log-type curve. But really, just do your own tests to suit you and the way you work.
#35
Raw Video / Re: Working space in After Effects
August 19, 2013, 07:41:15 AM
There's nothing complex about it: the app's color management does it for us, and the OP specifically asks what working space?

Setting working space to none is in effect switching color management off for the input source, just taking the camera-raw-generated RGB values through as-is and encoding out to rec709. Fine, but I'd rather use CM and define the working space.

re full-range YCbCr: yes, and so do I when my intermediate files are YCbCr, scaling as the last op to proper rec709 levels at final encode, as I'm sure you do. But if grading RGB intermediates I'll set a wider color space as the working space, as many do for raw image development.
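
For what it's worth, that last-op squeeze can be done with ffmpeg's scale filter, something like this (a sketch; filenames illustrative):

ffmpeg -i graded_full_range.mov -vf "scale=in_range=full:out_range=limited" -vcodec prores -profile:v 2 delivery.mov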

If you set none, and the app as a result defaults to rec709 or sRGB, then any color adjustments you do in ACR have the gamut restricted to rec709, even though you have the benefit of higher bit depth to support wider gamuts for color adjustments.

But if the OP is going straight to ProRes with only the necessary black level, highlight and white balance work, then setting a rec709 working space is enough, so there seems little point running unnecessary risks by setting 'none' and possibly getting color and tonal response mangled, with the camera raw assumed to be rec709 or sRGB.
#36
Raw Video Postprocessing / Re: Banding issues
August 18, 2013, 11:54:45 PM
Is the banding vertical? Looks like the problem from early versions of raw2dng; are you using the latest? Other than that, maybe it's due to underexposing, which looks to be the purpose of your test? Which I really don't understand, as camera raw / h264 isn't film to push and pull.

As long as you get your lighting ratios how you want the final image to look, then expose for good skin tone and darken shots in post?
#37
Raw Video / Re: Working space in After Effects
August 18, 2013, 11:36:43 PM
Quote from: reddeercity on August 18, 2013, 10:44:48 PM
Yes you are right !

I think "none" is your native color space in your Raw file.

Camera space; there's no real defined color space in raw. You transform Camera Space -> Intermediate Space -> Device Output Space. There are no doubt some design decisions made at the sensor that limit the output spaces to specific ones like sRGB and AdobeRGB.
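
Sketched as matrix math (an illustration of the chain, not any particular app's implementation): RGB_out = M(XYZ->709) x M(cam->XYZ) x raw_rgb, where M(cam->XYZ) comes from the camera profile (eg: the inverse of a DNG's ColorMatrix, which maps XYZ to camera) and the second matrix from whatever output space you pick.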

Quote: I do the bulk of my WB, Color Correction with raw in Photoshop, export as tiff 16bit (bake my Look)

i use the RGB wave monitor to clamp it down close to Rec 709(16-235)
I use as my main computer monitor , 32 inch LCD HDTV Sony Bravia Color Calibrated to Full Range
Rec 709 (0-255). So my color space is all ready in hd Color space.

'Full Range' RGB, what's that? 0-255 8-bit range? It makes no sense. YCbCr is described as being 'full range' at times, ie: 0-255, or 1-254 in the case of xvYCC, but that doesn't fit into 8-bit RGB 0-255, or into 16-bit integer, so 32-bit float RGB is required to hold and retain 'invalid' RGB values below 0.0 and greater than 1.0. There are none of those in Canon camera raw, as it's not HDR, but 'invalid' values can be generated in color manipulation.
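
For reference, the 8-bit quantization rec709 actually defines is Y' = 16 + 219*E'Y and Cb/Cr = 128 + 224*E'C, which is where the 16-235 / 16-240 'video range' numbers come from; 'full range' just spreads the same signals over 0-255 instead.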

Quote: And as i understand the color space Options ( i'm new at this with AE, I come from Autodesk Smoke)
you have 2 color space option to deal with.
That your input color space (DNG's/TIFF's) aka: working color space
and the output  monitoring Color space ( if just using main computer monitor or output Via
Capture Card to Grading monitor). Export Prores4444 32 bit Float.

i have a AJA Kona Lhi in MacPro to Panasonic Plasma THX Calibrated to Full Range RGB (0-255).
I have found if i change color spaces , my working  space & monitoring space do not match
but that in my Case only, so i guess it depends on your monitoring Equipment.
:)

Working space is not 'input' space. You tell your software what each source clip's input space is, ie: Interpret As, and you may have source files from various cameras; you set an overall working space to suit your output, generally for the reasons above and here: http://www.cambridgeincolour.com/tutorials/color-spaces.htm & http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm & http://www.cambridgeincolour.com/tutorials/sRGB-AdobeRGB1998.htm. Preview in sRGB generally, unless you have a wider choice of monitoring, using a color calibrated display device, and if necessary use a view LUT or ICC profile, depending on what your color managed software uses under the hood, to transform for viewing from the wider working space to sRGB monitor space.

That's the whole point of color management in grading / editing / image processing software: to do all the transformations, by LUT or ICC, from input through working and on to output space.
#38
Raw Video / Re: Working space in After Effects
August 18, 2013, 11:33:53 PM
Quote from: ilia on August 18, 2013, 09:31:01 PM
Isn't rec709 for HDTV?  What color space is used if none is chosen?

rec709 is for YCbCr video formats, including ProRes, and for general imaging. If you're encoding to ProRes 4:4:4 'YUV' or h264 or mpeg4 then it's rec709 primaries and transfer; the color matrix may vary depending on resolution. If you're encoding to some RGB variant then it's rec709 primaries and transfer and probably the 601 color matrix. Gamma curve and white point to suit rec709 video space, or sRGB if it's images or image sequences.
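
If it helps, this is the sort of explicit tagging I mean at encode time (an ffmpeg sketch; filenames illustrative):

ffmpeg -i in.mov -vcodec prores -profile:v 2 -color_primaries bt709 -color_trc bt709 -colorspace bt709 out.mov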

Working space color space is generally about what width of gamut your source file is opened into.

If you're opening an 8-bit h264 video file into AE, then rec709, or sRGB in the case of Canon / Nikon / GH3 (MOV) JFIF video, is the 'best' workspace 'gamut' for those files; any wider and it'll almost certainly lead to color and gradation issues, including banding.

As you're using raw it has no 'colorspace'; it's sensor data. That's how come you can set sRGB or AdobeRGB in camera, or 'transform' from camera raw space's 'open', 'wild' gamut to an intermediate space like XYZ and then to a device-referenced output space and specification like sRGB, rec709 (16-235/240) or JFIF for video.

But as said, if you are going 'straight' to ProRes with little color processing in ACR via AE, then there's little point transforming from Canon camera space to anything wider than rec709.

Leaving it 'open', ie: camera space, could lead to problems with translation of color gamut later in FCPx. The whole point of color management and 32-bit float processing is to make use of it. :-)
#39
Raw Video / Re: Working space in After Effects
August 18, 2013, 06:04:49 PM
ilia, if you're going to ProRes then a rec709 work space would do; no point going to a wider gamut or specifying 'none' (undefined primaries), the second option being the one I'd consider more dangerous.

You could choose a wider-gamut workspace if you wanted and were doing any substantial color processing in camera raw; because your source is higher bit depth than 8-bit, wider gamuts are suitable.

@eyeland, the difference between 32-bit and 16-bit you're not going to tell visually, as you're viewing sRGB / rec709 0.0 - 1.0 anyway; the 32-bit benefit is hidden, ie: greater precision, no clamping / clipping to 0.0 - 1.0 display space if you push shadows and highlights (of course you need to pull them back into 0 - 1, but nothing is lost going over or under whilst grading), and linear-domain operations IF you're scaling, sharpening or blending. More control over shadows and highlights at 32-bit with AE.
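
A quick worked example of that hidden benefit: apply a 2.0 gain to a 0.7 highlight and it lands at 1.4, which a 16-bit integer pipeline clamps to 1.0. Halve it again and float hands you back 0.7 intact, while the clamped version lands at 0.5 with the highlight gradation flattened.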

#40
Just a bit of a distraction from a nice short, but does anyone else see the heavy posterization in the brown shadows throughout the video?

Areas where it's really visible, to me anyway: the scene where she goes in / out of her apartment door, the shadows in the darkened room far left, the hand in front of the picture frame, the close-up on the picture frame itself, the shadows in the girl's hair in the photo in the picture frame; it's everywhere.

I'm watching on a calibrated display and I rarely see this sort of posterization. I thought my calibration was off, so I checked the mp4 in Avisynth, doing a proper conversion based on the levels in the video and on what MediaInfo gave me, such as the x264 encoder settings, and the posterization looks as bad there, and on another display. Looking at Avisynth's waveform, many shots have had the shadows lifted and then the shadow levels compressed, which makes me think that's why I see the problem.

I don't mean this as a criticism of the short.
#41
Quote from: Andy600 on August 14, 2013, 12:27:44 AM
I'm trying to get my head around how debayering works and specifically how line skipping in non-crop video might affect the bayer readout of the sensor? Is it different to crop mode 1:1 video?

I'm thinking about this because a cameras bayer pattern affects how certain debayering algorithms perform which is why something like Davinci Resolve debayers BMCC, ARRI and Red cameras well but sometimes struggles with ML video.

If the pattern is different for crop and non-crop how does the debayer know? Is it written into the DNG file?

I also noticed in Raw Therapee that AMaZE seems to perform best with non-crop video and DCB with crop.

Why not use dcraw and output Bayer raw images to see any differences between patterns?
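
dcraw's document mode will do it, something like this (flags from the man page, untested on these DNGs):

dcraw -D -T -4 frame.dng   # -D: no demosaic, no scaling; writes the raw Bayer mosaic as a grayscale TIFF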

re: Avisynth, yeah, playing with it on 1200x496 T2i raw, looking at upscaling methods and algos on 16-bit source.
#43
Andy, best of luck, although I think you may be approaching it from the wrong end, ie: image manipulation vs video. I'd not underestimate what's involved in getting an app built for image processing at high precision to play back video or image sequences in realtime with color processing applied, vs an app already built for video playback that needs the ability to import raw and decode to RGB.

As mentioned, the ops that actually use raw data are a handful; the majority of color processing, temporal NR and motion-compensated sharpening are not done on raw data.

The need is to decode raw into a 16 or 32bit processing chain.

Yes, there are some nice Lab-based tools in these raw editors which would be good to have in a video app, but as said, YCbCr color tools are available.
#44
ProRes is available in some profiles via libav, ffmpeg and FFmbc, but more importantly, what does your NLE or grading app accept?
#45
Valid discussion, well talk is cheap as they say. :-)

I think first identify open source apps that form a good base, like Blender with its 32-bit float nodal OpenCL compositor, VSE and color-managed (OpenColorIO) support for LUTs, or Shotcut with its 16-bit GLSL grading tools and basic in/out editing and playlist functions, a bit like Bulletproof. Then approach the devs of those apps with structured ideas, mock-ups and information, and hopefully a couple of ML-raw-related devs like you suggest, with patches for raw support and EDL output for example, and see what transpires. Blender devs would be complete dicks not to accept raw support in Blender.
#46
My last line was about building ON an app like the above, not building from scratch: discuss with the developers of those apps in existence, mock-ups, info about ML raw. Darktable devs are considering support for video already. But again, so little is done on actual raw data, so a raw editor is not necessarily the starting point. What about raw support in Blender for example? Probably not a good example. :-) Shotcut maybe; that's cross-platform.

A lot of starting from scratch and reinventing what's already out there goes on, including things like the OpenShot NLE, a misguided attempt that didn't go towards solving the problem we've had since the days of Kino twenty years ago, and which is now up for a complete recode on crowd-sourced finances.

Yes, of course there are talented open source coders out there, but don't you think that if they were both talented and had endless time, we'd have a decent open source NLE or grading app by now, and that we'd already have all the heavy-lifting libraries?

Open source generally works by a developer needing to solve a problem THEY have, and then maybe like-minded developers join; it doesn't work by naive posts on forums from non-coders. It would take more than one talented coder to build even one aspect of an app like you mention in any reasonable amount of time.

And really, I shouldn't have mentioned Linux as if it were the only open source route; I use many cross-platform or Windows-based open source apps. Just to avoid any misunderstanding.

#47
Andy, you misunderstand me; I wasn't saying don't bother. I find it rather irritating to read naive calls to arms for open source software solutions from people who have little or no experience or knowledge of the history and immense effort behind such involved apps, and who think it's so easy and a no-brainer.

Fixing raw demosaic in Resolve is a small, simple step, especially for a large commercial enterprise. Creating an app as you describe is not like creating some dumb glorified script of a batch conversion app.

As someone who has used open source software (Linux) for twenty-plus years and in that time hasn't seen one decent NLE or grading app arrive, not to mention higher-than-8-bit-precision color grading, or even a GPU-based motion analysis library for deflicker, denoise, motion-compensated sharpening and such, it's a bit of a joke to hear those recently acquainted, in passing, with the open source world, and not even on Linux, talking of such big projects as easy no-brainers.

Much of what you mention is not processing done on raw at all, but on the RGB output from a raw decoder. Either consider building on an open source app like ufraw, RawTherapee etc, or just badger the commercial enterprises to fix the shit you pay for. :-)
#48
Seems like a lot of effort and duplication for the handful of ops that have to be done going from raw data to RGB.

Most stuff specific to raw development is done once, and not a lot of it is even under user control, apart from white balance, and personally I don't use that as a creative color control.

All other ops are done in RGB within any NLE or grading app, and in a decent one at 32-bit float.

Admittedly the tools that work in Lab may be a bit lacking but then there's YCbCr at 32bit in an NLE or grading app as an alternative.

A small batch script using libraw or dcraw will give flat, 'open' gamut 16-bit output to TIFF. Admittedly that's intermediate storage, but at least it's quick, with full control of raw development, a very good debayer, no sharpening and no camera curve applied.
#49
You could just use a 5-line batch script with something like dcraw or libraw, export linear RGB open-gamut 16-bit TIFFs, and be done with raw. libraw offers an excellent debayer, no sharpening, noise reduction if you wish, and bad pixel masking.

What do you really gain putting raw straight into the grade? No intermediate file storage, and control over WB, which, every time you adjust it, requires the debayered data again, fresh or from a cache, to create the linear RGB that's actually used to grade with.

If your software is slow processing raw or doesn't do a good job, you could just use libraw to batch convert on the CLI using a high-quality debayer, subtract noise, bake the WB and work with 16-bit linear TIFFs, more than enough levels to contain the minimally processed raw as RGB, and avoid all the repeated raw development steps?
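
A sketch of that batch step using LibRaw's bundled dcraw_emu sample, which accepts dcraw-style switches (values and the bad-pixel file name are illustrative):

for d in *.dng ; do
   dcraw_emu -T -4 -q 3 -w -n 150 -P badpixels.txt "$d"   # 16-bit linear TIFF, camera WB baked in, wavelet denoise, bad pixels masked
done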
#50
Quote from: iaremrsir on July 17, 2013, 08:17:35 AM
In Cineform Studio, set the output curve to Cineon 95-685. That will bring all details back. Also make sure that Sat Clip Point is set to .5 to avoid magenta highlights. If you want to work with the Protune curve you still can. In After Effects or Premiere, set the working space to 32bpc. In AE if you hover over the highlights, you should see that some of them go above 1.0, which means there is still detail there. Use something like HDR Highlight Compression or Levels to bring those details back. In Premiere make sure to use a 32bpc plugin to lower the highlights back down.

P.S. You can tell if it's raw by checking if there's an option for "Demosaic Type" in Studio. If there isn't one, then it's not raw.

If you map to the Cineon 95-685 'restricted' 10-bit levels range, you shouldn't get values over 1.0 in a 32-bit project, and similarly nothing negative below 0.0 either, so no need for HDR-to-LDR compression. That would be why you 'see the detail come back': it's within 0.0 - 1.0 display space. Beyond those levels the data will appear clipped and crushed; that doesn't mean it's lost though.

At 10-bit you have 1024 levels available, and with a 32-bit workspace there's no clipping whatever the source levels range. You may not see your highlights or shadows on your monitor correctly, making you think they are clipped and crushed, until you pull them into the 0.0 - 1.0 display range in the grade, but they're still there, unclipped, ie: < 0.0 or > 1.0.

If you instead map to 0 - 1023 and make full use of the 1024 levels, you will go beyond 0.0 - 1.0 should you have such data in your raw; again, at 32-bit the data won't get clipped, but to 'see the detail come back' you'll need to grade it into 0.0 - 1.0. So the decision is: transfer full range and grade later, or squeeze into the restricted range at conversion, regardless of the 32-bit workspace. Same old 8-bit video restricted-vs-full levels range, crushed and clipped etc etc.
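
To put numbers on that, assuming the common 10-bit video-range decode where code 64 maps to 0.0 and 940 to 1.0: f(c) = (c - 64) / (940 - 64), so f(685) ≈ 0.71 sits comfortably inside display range, while f(1023) ≈ 1.09 decodes above 1.0 and needs grading back in.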

Camera raw straight into a 32-bit workspace will not clip, so does LOG matter at all when working at 32-bit? If going to 16-bit or 10-bit, then yes.

Sorry iaremrsir if I misunderstand your comment.