Messages - deleted.account

#51
Regarding improving the workflow: if you want to test any improvements, it is possible to import 16bit tifs into Avisynth, upscale at 16bit depth and then encode out to 8bit 4:2:0, whether via Virtualdub, AVSPmod or even on the command line with an avs-enabled build of ffmpeg (MS Windows only).
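A sketch of that last encode step, assuming an avs-enabled ffmpeg build on Windows; the script and output names are hypothetical, and the command is echoed rather than run so the intent is visible:

```shell
# Hypothetical names: upscale16.avs is the Avisynth script doing the
# 16bit import and upscale; ffmpeg then takes it down to 8bit 4:2:0.
avs_script="upscale16.avs"
out="upscaled_420p8.mp4"
cmd="ffmpeg -i $avs_script -pix_fmt yuv420p -c:v libx264 -crf 16 $out"
echo "$cmd"
# Uncomment to run once an avs-enabled ffmpeg is on PATH:
# $cmd
```

The -pix_fmt yuv420p is what forces the 4:2:0 8bit output regardless of what the script serves.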
#52
Personally I use temporal denoising in 4:4:4 or 4:2:2, with a denoiser that compares the noise profile over multiple frames, like Neat Video or Dark Energy. I wouldn't even consider NR in ACR for raw video.
#53
ffmpeg provides free 16bit RGB and YCbCr video codecs including 4:4:4.

dng to exr is trivial: raw2dng -> dng, then dcraw piped to an hdri build of imagemagick.
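As a minimal sketch of that pipe, assuming dcraw and an hdri build of imagemagick are installed; the filenames are examples only:

```shell
# dcraw -c decodes the dng to stdout, -4 keeps it linear 16bit;
# imagemagick's convert reads from stdin and writes a 16bit exr.
# Needs an hdri build of imagemagick. Filenames are hypothetical.
dng="frame_000001.dng"
exr="${dng%.dng}.exr"
echo "dcraw -c -4 $dng | convert - -depth 16 $exr"
# dcraw -c -4 "$dng" | convert - -depth 16 "$exr"
```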

If it were possible to pipe straight from raw2dng, we wouldn't have to create dngs at all when going to tif, exr or 16bit video codecs etc.
#54
Quote from: romeus on July 09, 2013, 02:52:34 AM
There is reduction in contrast and color shift
it's caused by quicktime, which converts the luma from 0-255 to 16-235 and the chrominance to 16-240. Now you have broadcast color. Prores is Rec.709 color space.
Try the x264 encoder; it allows you to select 0-255 full range.
http://www003.upp.so-net.ne.jp/mycometg3/

If you go RGB to YCbCr then it should do that mapping; full range in x264 is a bad option unless you can be sure of the whole processing chain back to RGB.

As mentioned, the OP's problem is due to wrong gamma and/or color space in the ACR output, or to using Quicktime on an older Mac OS than Mountain Lion.

Tiffs can be in many color spaces, not just one flavour of RGB, so the output from ACR needs to be sRGB color space and gamma.

I believe Mountain Lion + latest QT fixes the qt gamma shift issue discussed all over the web.

Can you post a sample tiff and the matching prores output?
#55
Raw Video / Re: HDR from a single RAW VIDEO!
July 04, 2013, 03:04:49 PM
Along with tone mapping, there is also Luminosity Painting / Contrast Masks / Saturation Masks such as:

http://goodlight.us/writing/luminositymasks/luminositymasks-1.html

Specific to image manipulation but also achievable with travel / track mattes / luma masks in grading software for video.
#56
Image preview will assume sRGB gamma of approx 2.2, and images really should be 'handled', processed and stored that way, rather than with a rec709 display profile, which is video-space territory, unless you're working in a color-managed chain, ie: as in the Photoshop reference above. But it can lead to confusion.

Throwing rec709-gamma'd imagery into the mix and the encoder can lead to mishandling and result in off colors and contrast.

There's no issue with ffmpeg's Prores that I have seen; ffmpeg uses BT601 luma coefficients for prores, not rec709. If that assignment gets lost in processing, or is flagged incorrectly at encode and actually acted on in the player (re: the MPC mention above), then color and contrast will be off.

As avisynth is not color managed, we need to take care that what we put in is the same as what we flag at encode. For ffmpeg prores to x264, I'd suggest converting the matrix in avisynth and seeing if that helps.

Or, if encoding to x264 without Avisynth, ensure the correct input 'color space' info is chosen.
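One way to pin that down with the x264 CLI, as a sketch; the input clip name is hypothetical, the flags are standard x264 options:

```shell
# Declare matrix / primaries / transfer explicitly so nothing downstream
# falls back to guessing by resolution. Input clip name is an example.
in_clip="graded.y4m"
color_flags="--colormatrix bt709 --colorprim bt709 --transfer bt709"
echo "x264 $color_flags -o out.mkv $in_clip"
# x264 $color_flags -o out.mkv "$in_clip"
```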

Personally I wouldn't judge by, or assume, anything Apple QT-based products produce as being in any way correct, especially on pre-Mountain Lion. Color management at the OS level is dodgy prior to that. :-)

And be careful when any processing in the chain isn't color managed or ignores color matrix declarations, instead falling back to choosing the color matrix by resolution, ie: BT601 assumed below HD res and BT709 for HD resolutions (1280x720P and up). For example, handling 550D / T2i 1152x482 image frames might result in an encoder / player applying BT601 to the processing.
#57
Raw Video / Re: 5D3 Raw 4:2:2 or 4:4:4?
June 28, 2013, 12:17:13 AM
There's an abundance of info on the web about camera raw. Best search and be enlightened. Suffice to say camera raw is not 3 channels at 14bit color depth.
#58
Raw Video / Re: 5D3 Raw 4:2:2 or 4:4:4?
June 24, 2013, 11:58:18 PM
ffmpeg provides 16bit RGB and 4:4:4 rawvideo YCC, described as rgb48be and rgb48le, yuv444p16be & yuv444p16le, the (le) depending on whether the LSB comes first or not. Whatever color space gamut is preferred, linear or gamma encoded. A bit academic really, as it comes down to what the grading app or NLE supports.
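For example, writing one of those formats out with ffmpeg might look like this; the tif sequence pattern is hypothetical:

```shell
# rgb48le = 16bit-per-channel RGB, little-endian; rawvideo in a NUT
# container keeps it lossless. Input pattern is an example only.
pix_fmt="rgb48le"
echo "ffmpeg -i frame_%06d.tif -c:v rawvideo -pix_fmt $pix_fmt out.nut"
# ffmpeg -i frame_%06d.tif -c:v rawvideo -pix_fmt "$pix_fmt" out.nut
```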

My understanding is that raw is not even in a color space, nor is it 14bit per channel in the sense of, say, 10bit YCbCr Prores or 16bit-per-channel RGB48. 4:4:4:4 is with alpha; 4:4:4 is not.

When the raw data is 'developed' into more 'usable' RGB data, it can be matrixed into a color space like sRGB, AdobeRGB, Prophoto, XYZ etc, either baked in or interpreted in a color-managed raw handling app; subsequent color processing is then done on RGB data, not raw.

Only a small proportion of the operations done in creating a 'grade' or 'look' are actually done on raw data, ie: raw to RGB at the best bit depth the app can muster. From then on most ops are done on RGB data, preferably in Lab.

The benefit of importing raw over 16bpc is control over WB, debayer algo etc, and avoiding all the intermediate storage.

**EDIT** Posted same time as pavelpp. Answer: application support. Avisynth is the only tool I know of that will handle those ffmpeg formats, and plugin support is limited, Dither tools being one.
#59
What ops did you do in ACR, or was it simply to get 16bit tiffs?

Just wondering, as dcraw will go from .dng to 16bit linear RGB tiffs (or sRGB, AdobeRGB) and does bake in either auto or camera white balance, but all with no user intervention required. Gives a flat, linear-domain 16bit image.
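A sketch of that dcraw invocation; the filename is hypothetical and -T writes the tiff next to the dng:

```shell
# -T write TIFF, -4 linear 16bit, -w camera white balance,
# -o 1 sRGB primaries (2 = AdobeRGB). Filename is an example.
dng="M00-0001.dng"
echo "dcraw -T -4 -w -o 1 $dng"
# dcraw -T -4 -w -o 1 "$dng"
```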
#60
dougie, thanks, I'd not tried that.
#61
Quote from: Chagalj on June 22, 2013, 06:58:33 PM
1280x512  2.5:1 ratio 23,976 fps ----------   310 frames!!! This is usable!!!
Sandisk Pro 95MB/s

My thoughts exactly; out with the cam today and getting 360 - 390 frames each time, same settings and card as yours. Rewind's recent build.

At those frame numbers I found I still needed to be careful about when to start, and sometimes missed the moment, with drop-out ending the take prematurely; then I had to wait for whatever the bar thingy at the top of the LV is to write to the card before starting the next shot. So I'm thinking it may be better to switch off stop-at-skip and just skip frames to keep some sort of continuity, if this is at all possible with the latest developments? I assume the log file logs dropped frames to help at edit time?

Only other thing I haven't sorted yet: having got into the habit of 5x & 10x zoom for focus help when not using raw, I now find 10x crashes the camera, obviously well known, but how do I get out of 5x once in it? Pressing the minus zoom button doesn't seem to do anything once in 5x.
#62
@fatpig, is this post before or after A1ex mentioned altering the exif data? You could try dcraw if no one has a better idea.
#63
Shield, yeah, that comment made on EOSHD you refer to gave me exactly the same response, and if you look at some sort of timeline of events it's not hard to imagine that there was a lot of activity here from followers of the initial raw histogram development that led to the chance of raw at all; raw yuv as an alternative and the final breakthrough for raw recording all happened months before, with ML members testing and giving feedback.

BUT Andrew is not that fanboy poster, and he has put effort into compiling a great deal of disparate info scattered over this site, which makes it far less frustrating trying to find stuff, especially for those adopting / investigating raw now.
#64
Raw Video Postprocessing / Re: raw2exr in Linux
June 17, 2013, 10:46:39 PM
houz, thanks for the extra info and 16bit route from darktable-cli.

Regarding a raw sample, I guess you're looking for a better one than I emailed to you a few weeks ago? I know it was poor and only off a 550D at 960x408, so I totally understand you looking for a better sample. ;-)

Quote: Before you try to support ml-raw-video in darktable, please talk to Alex about his plans with the raw-video format.

Sure, but there's a MASSIVE difference between being able to open an ML Raw file and
Quote: supporting video workflows better.
as I'm sure houz is fully aware: loading large image sequences, being able to play through an image sequence, even trimming in/outs, interpolated keyframed effects and mattes / blending, even ffmpeg export. Not suggesting these are the scope of what the DT devs are considering, but it's still a massive difference, and it's not necessarily only about raw format support; color-managed, 16bit OpenCL processing in Lab with more photo-orientated processing tools can be very useful for non-raw video formats too.
#65
Raw Video Postprocessing / Re: raw2exr in Linux
June 15, 2013, 04:52:20 PM
Quote from: escho on June 15, 2013, 04:46:47 PM
Why do you use raw2dng.exe in Linux?  I work with raw2dng without exe and without wine.  ;)

Edgar

Simple: I haven't found a Linux binary and I haven't been interested in compiling it myself; raw2dng, like many Windows binaries, runs just as fast under wine, and for this purpose it does the job. :-) I'm sure I can think of a few more reasons. :-) But if you have a Linux binary, or a link to one you'd like to share ;-) that works on Ubuntu 13.04, then great.
#66
Raw Video Postprocessing / Re: raw2exr in Linux
June 15, 2013, 04:49:17 PM
You're absolutely right, just like the UFRaw workflow listed in the output of the raw2dng converter.

After trying out DT on video, the devs suggested I look at darktable-cli for a number of reasons, some relating to the GUI handling and workflow.

http://blendervse.wordpress.com/2013/03/21/darktable-for-video/

The only restriction on darktable-cli, I think, is that it's limited to 8bpp output, so I don't think it will export 16bit formats such as EXR. I may just have been using it wrong, though; I got an error message about DT's API.

#67
Raw Video Postprocessing / Re: raw2exr in Linux
June 15, 2013, 03:49:51 PM
Updated the shell script to handle all the raws in a folder.

Creates a folder for each raw using the raw file name -> extracts the dngs to that folder -> creates a sub folder called EXR -> dcraw + IM handles dng to 16bit exr as above. There's an option to set the tmp folder; I use a separate drive for tmp to hold temporary pipe data.

export TMPDIR=/media/......
for file in *.RAW ; do
mkdir -p "./Out/$file/"
wine ./raw2dng.exe "$file" "./Out/$file/"
mkdir -p "./Out/$file/EXR"
   # glob must sit outside the quotes so *.dng actually expands
   for out in "./Out/$file"/*.dng ; do
      # name each exr after its source dng so frames don't overwrite each other
      dcraw -c -w -H 1 -o 5 -q 3 -4 "$out" | convert - -depth 16 "./Out/$file/EXR/$(basename "$out" .dng).exr"
   done
done
#68
Anyone know if raw2dng can use standard output, or if not can the option be added a1ex?
#69
Raw Video Postprocessing / Re: raw2exr in Linux
June 13, 2013, 07:41:40 PM
I use DT both via the GUI or via the command line darktable-cli. Hundreds of frames rather than thousands though.

I've found DT a bit unstable for sure, but I am running PPA unstable and git builds. You could try using OpenCL in DT if your vid card is up to it; that may help.

Regarding floating point EXR's, I think they're half float at best via DT, 16bit rather than 32bit full float. Not that it matters much; 32bit means massive files and there's little benefit in storing images that way for most of us.

Imagemagick will also give you EXR output; you'll possibly need to compile an hdri version, unless things have changed with IM.

Another option might be to use dcraw to go from dng to 16bit tif; you'll have control over the debayer algo, WB, sensor saturation level and color space. Going to tif will bake the WB, but Blender's tools are totally lacking for raw anyway.

**EDIT**

Here's a simple script I just hacked for the CLI to create 16bit EXR's from the .dng's:

for file in *.dng ; do
dcraw -c -w -H 1 -o 5 -q 3 -4 "$file" | convert - -depth 16 "./EXR/$(basename "$file" .dng).exr"
done

If you put Raw2dng.exe in a folder along with your dng's, create a sub folder called 'EXR', save the above shell script in there too, give it execute permissions, and make sure you have dcraw and an hdri build of imagemagick installed, then fire up a terminal, cd to the folder and start the shell script. You should get 16bit exr's in the EXR folder. :-) Of course you can use tif or png for 16bit too, but Blender will read the exr's as half float or float.

If you import the exr's via the Movie Clip editor change the color space input to XYZ.

If you don't want XYZ color space the dcraw settings and explanation is here:

http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm

So you could change the dcraw settings to suit. I just did:

-c    send to standard out
-w    use camera white balance
-H 1  highlight mode 1, linear with no clipped highlights
-o 5  XYZ color space (0=none (no colour management), 1=sRGB, 2=AdobeRGB, 3=WideGamut, 4=ProPhoto, 5=XYZ)
-q 3  best quality Bayer demosaicing (0=bilinear, 1=VNG, 2=PPG, 3=AHD)
-4    creates linear 16bit rather than 8bit gamma

If raw2dng would pipe to standard out we could avoid creating the dng's at all. Obviously a lot more could be added to the script, like a mkdir for each named RAW and sending the .dngs or EXR's to those named folders etc, but meh...



#70
You don't want to map to 16-235 in RGB, as the mapping is 0-255 RGB to 16-235 YCC and that's handled by the encoder; that mapping will be the default.

OP, what are you viewing your final movie in and extracting images from, and at what resolution? Resolution can affect whether BT601 or BT709 is assumed and used when encoding: if you don't specify primaries and luma coeffs, they will be picked based on resolution.
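To avoid that resolution-based guess, the color info can be declared explicitly at encode; a sketch using ffmpeg's per-stream color flags, with a hypothetical input name:

```shell
# -colorspace / -color_primaries / -color_trc tag the output stream so
# players don't have to infer BT601 vs BT709 from the frame size.
in_file="movie.mov"
color_meta="-colorspace bt709 -color_primaries bt709 -color_trc bt709"
echo "ffmpeg -i $in_file $color_meta -c:v libx264 out.mp4"
# ffmpeg -i "$in_file" $color_meta -c:v libx264 out.mp4
```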

What version of Photoshop and ACR, and is it properly color managed? Likewise, what version of QT and OS? To me it looks like the typical QT gamma shift of a pre-Mountain Lion OS and an earlier version of QT, plus the wrong color matrix. Also, as ACR is involved, maybe it's Prophoto assumed as rec709.

**EDIT** Oops looks like this has been said already.:-)

I actually thought Prores was BT601, not Rec709, with regard to luma coeffs?
#71
Yes this 5x and 10x references and 1:1 crop have been bugging me too. :-)

With the 550D raw builds in video mode 1920x1080, it appears there is no option to disable 5x and 10x, it's one or the other or am I looking in the wrong place?

I've tried disabling the multipliers before enabling raw video, and after, before loading modules, and after, but none seem to disable the 5x and 10x zoom.

So where is the 1:1 crop no zoom? :-)
#72
Re: the ugly way to work, I used to use Vdub and Avisynth but now prefer AVSPMod + Avisynth.

Here are some posts from my blog that may help you, about using Canon MOV's with Avisynth and Imagemagick and getting 16bit image sequences out of the process, something VDub can't do.

From two years ago, so plugin links may have expired or plugins improved:

http://blendervse.wordpress.com/2011/12/24/canon-magic-lantern-hdr-feature-to-10bit-lossless-h264/

http://blendervse.wordpress.com/2011/09/16/8bit-video-to-16bit-scene-referred-linear-exrs/

http://blendervse.wordpress.com/2013/03/12/pipe-to-rgb-avspmod-to-imagemagick/

http://blendervse.wordpress.com/2013/03/23/pipe-to-rgb-quick-how-to/
#73
Post-processing Workflow / Re: Best end quality...
June 08, 2013, 09:09:22 AM
720P mode is horrible, why throw more away at the start? Personally I use 1080P and resize to 720P at the encoder.
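A sketch of resizing at the encoder with ffmpeg; the input filename is hypothetical:

```shell
# Record 1080P, then downscale to 720P at encode time with the scale
# filter. Input filename is an example only.
vf="scale=1280:720"
echo "ffmpeg -i in_1080p.mov -vf $vf -c:v libx264 out_720p.mp4"
# ffmpeg -i in_1080p.mov -vf "$vf" -c:v libx264 out_720p.mp4
```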
#74
Ok, so I love Avisynth and use it a lot, including the LSB/MSB stacked format for 16bit. But I've got to say I think it's the wrong tool for the start of a workflow; it's better for the latter stages, like temporal noise reduction, luma sharpening, upscaling to 720P and 1080P, interframe and motion blur etc in YV24 and YV12, then going to the final encode.

There is a debayer plugin for Avisynth, though it doesn't appear to offer the best quality, http://forum.doom9.org/showthread.php?t=132238 and the sashmi plugin I think can output stacked LSB/MSB for 16bit processing in Avisynth.

But unless you're using Dither tools for 16bit stacked LSB/MSB, many filters don't work in RGB, or only a limited number work at YV24, and processing is 8bit only, so you'll have to keep jumping between 8bit and stacked 16bit to use certain filters.

Processing is very slow, very little is GPU assisted and 16bit LSB/MSB processing can be slow and unstable.

I've already looked at this a bit. I know a lot of what I'm saying is negative, and sure, there's probably some use for Avisynth from the outset rather than the latter stages, but I feel Avisynth is predominantly a YCC tool, and its filters have little support for the 32bit float linear processing available in other free tools that can be used on the CLI for batching, like UFRaw, Rawtherapee and Darktable.

Looking forward to you pursuing this, as when I looked at it a few weeks ago I basically thought there are better ways to do the raw development, and instead to feed raw-developed image sequences from Darktable into Avisynth, for the reasons mentioned above, to process before encoding.

Good Luck. :-)
#75
The color space conversion from ffmpegsource2 YV12 to RGB and then back to YV12 is not lossless and is totally pointless. You could remove it and save a bit of quality and speed.