GoPro CineForm Studio Premium/Pro Settings for 5D3 RAW Video

Started by Jake Segraves, May 17, 2013, 11:51:30 PM


iaremrsir

Come to think of it, I don't think using it with CineForm would benefit us much, because we already have so much control over everything in terms of the matrix and curves. But if you want to look more, here's a good explanation.

http://dcptool.sourceforge.net/DCP%20FIles.html

iaremrsir

Quote from: terranaut on January 19, 2014, 04:24:07 PM
are cf raw files supposed to allow the raw settings within resolve? for me they've always been disabled when importing cfraw files as either mov or avi into resolve.

my workflow for now is :
record as mlv (5dmkII) - convert to raw (mlvbrowsesharp) - convert to cfraw (raw2gpcf) - import to resolve

but the resolve raw panel is disabled, so i thought perhaps i could use gopro2pro to do the raw settings i can't use in resolve (temp, debayer, sharp), but any changes in gopro2pro aren't reflected in resolve either. so if there's something i am missing to allow me to use cfraw files in resolve, and then use resolve's raw panel, please let me know, thanks.

Sorry I completely missed this. Resolve doesn't give control over the raw parameters; it just debayers the file and gives you an image. If you shoot a color checker or DSC one shot, you can balance all your footage much faster. You get CF compression w/ Resolve's debayer.

tonybeccar

Any thoughts on MLV support?? :)

1%

mlv + mlrawviewer out to cineform, with amaze debayering in the middle, which mlrawviewer already does. right now the prores out there is kind of winning.

DANewman

Quote from: tonybeccar on February 07, 2014, 06:27:04 AM
Any thoughts on MLV support?? :)

First I've heard of it, I've been busy.  Is it what should be supported next?  If so, please point me to the specifications.


terranaut

thanks iaremrsir for answering me, i appreciate it!
thanks also to david, whose efforts to help aren't just seen in the magic lantern community but in others as well, much of it on his own time, sort of like the john nack of video codecs.
thanks also to everyone else in here who's gone through remarkable efforts to code, test, and share. it seems like ML has increased what canon's dslrs can do literally tenfold, and it makes me wonder when canon will address magic lantern directly, acknowledging how impressive it is as an idea, a reality, and a community.

     it seems a given that MLV is replacing the RAW format, since its pluses like sound, metadata, etc. make it as much a step forward as cdng was beyond dng. hopefully that new program to convert MLVs directly to DNGs kindly put out by tonybeccar will be able to create cineform files as well with some tweaks to the raw2gpcf code; this way, even if magic lantern raw is never supported directly within the gopro program, we can use another alternative with a gui, plus the possibility of syncing audio in that same cineform raw mov file.

     though i've done photo raws for over a decade, i am fairly new to video raw, so i have some mental workflow hiccups. likewise, i have used cineform for 8 years, but only on the compressed side. magic lantern has moved me over to video raw, and i'm grasping davinci resolve now, but i am hung up on some things i cannot seem to search an answer to that i 'get'. i swear i've tried! if anyone can answer any of these 4 questions i'd appreciate it.

1 ----------
CINEFORM CODEC COMPRESSED vs CINEFORM CODEC RAW
i have an initial raw file from canon/ml, and i convert it to a cineform mov file with raw2gpcf. is the output file now -
     1- still a true raw file, with the raw numbers losslessly compressed via cineform and then put in a mov container (much like a photo raw file goes into a dng file)
     2- or is it now a true mov file, developed out of raw, just with such high bit depth and quality of compression that it comes veeeery close to being akin to raw (sort of like turning a photo raw into a 16-bit tif using lzw compression)

2 ----------
DEBAYERING
     i am assuming that a RAW file stops being raw at the very moment a program puts those numbers through the debayer formula. i know when using lr/acr and photoshop, that debayer moment is when i hit OK in lr/acr and the raw file is debayered into a tif for further editing in photoshop.
if this is true, then the best adjustment range/quality happens within lightroom before debayering.
but for video raw files, i understand now that the moment of debayering doesn't happen AFTER resolve's workflow, but instead within resolve, just BEFORE resolve's corrections workflow, so that resolve operates more or less like photoshop using tifs?
     if this is true, then even though i cannot do raw adjustments in resolve, i can do raw adjustments in gopro?
if true, then is it that resolve's non-raw correction algorithms are better than cineform's raw correction algorithms, so someone would just use cineform for the codec and skip the gopro program adjustments?
     likewise, since resolve can natively read both the cineform raw and cineform non-raw codec(s), does it skip cineform's own debayering and use its own debayering filter, which is preferred anyway because davinci's debayering is better quality than cineform's older debayering algorithms?
cineform has raw debayering quality options, which basically appear to us as a form of sharpening; does resolve not have debayering options, or is its raw-panel 'sharpness' their version of debayering?

3 ----------
LOGS
     so raw video itself has no log curve, but we can assign one, whereas compressed video does not have a log curve but you can assign a simulated one just to try to squeeze some extra DR? for instance, with h264 recording i was using technicolor's profile in-camera, and when importing into a program i would assign that profile to the video to give me a 'log-like' DR.
     when you convert a bare raw into a cineform raw, you assign a log, but that log curve can always be changed, just as metadata, until you debayer the video and then it's baked?
is a video log simply implied numerics, much like a 2d photoshop curve, or is it more than that?
     i see assigned logs as plain numerics, ie 90 or 400, and then i also see sony's slog, cineon's clog, resolve's bmdfilm, and gopro protune. are companies' logs just a certain single numeric on a 2d curve (like at cineon's site it says clog is 400), or are companies' logs a series of numerics on a 2d curve, and that's what makes them a bit more special than a single numeric point, much like a 1d vs 3d lut where the 3d ones have more specifics?

4 ----------
COLOR CHECKER
     i have an xrite colorchecker, and with photos i can use their lr plugin to get a custom color profile. i know the colorchecker can be used to manually set white balance, but is there a video plugin that uses the whole color chart automatically to color correct, not just white balance?


thanks for any input, as i have read up but a few things aren't completing circuits in my head. if i can nail down the raw workflow for video as i have for photos, why, my life would be complete. for the week.
gary g / wpb,fl

DANewman

Q1:  It will be CineForm-compressed into an MOV or AVI.  You can choose to compress native RAW (CineForm RAW) or develop (debayer) to RGB or YUV and compress that as CineForm.

Q2: There is no significant advantage to color correction upon RAW vs debayered RGB; both have the same dynamic range.  However, texture and detail are controlled by the demosaic, so if you can choose that later, I would think that is a bonus.  Also, RAW compressed is about 40% the size of RGB compressed, with the same quality.

Q3: All compressed sources must have a log or gamma encoding; all compressors assume there is at least gamma correction (e.g. H.264, ProRes, etc.)  There are quality/efficiency gains using log encoding.  Uncompressed RAW is linear, so you have to select a curve to compress it.  This is not metadata, as it does manipulate the data before it is stored.  The CineForm codec has some clever tricks to use metadata to describe decoding behavior, independently from the encoding curve applied.  Most log curves are 1D LUTs.  GoPro Protune is log 113, SI-Log is log 90.  I like log curves that are mathematically described, rather than loading a table:
  output = log10(input * (base - 1.0) + 1.0)/log10(base)     // input and output ranges are normalized 0 to 1, base is the log base (e.g. 113)

Q4: I don't have that.
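
The curve above can be sketched in a few lines. This is a plain illustration of the formula Dave gives (not code from the CineForm SDK), with the exact inverse included so round trips can be checked:

```python
import math

def log_encode(x, base=113.0):
    # output = log10(input * (base - 1.0) + 1.0) / log10(base)
    # input and output are normalized 0..1; base 113 is the Protune value above
    return math.log10(x * (base - 1.0) + 1.0) / math.log10(base)

def log_decode(y, base=113.0):
    # exact inverse: recover normalized linear light from the log value
    return (base ** y - 1.0) / (base - 1.0)

print(log_encode(0.0), log_encode(1.0))  # black -> 0.0, full scale -> 1.0
print(round(log_encode(0.18), 3))        # 18% linear grey lands well up the code range
```

Whatever the base, black maps to 0 and full scale to 1; the base only controls how hard the shadows are stretched.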


iaremrsir

Quick question Dave, how does the base of the logarithm connect to the dynamic range it's optimized for? Also why not log base 2 since every stop is a doubling of light?

DANewman

I thought something like that at first: wouldn't you use a curve that divided the stops equally amongst the available codewords: 10-bit - 1024 codes, 12-bit - 4096 codes, etc.?  So for a 12-stop camera using 12-bit log storage, you would think you need 341-ish codes per stop (4096 / 12 = 341.33).  But the source in linear space is digitized at, say, 14-bit (with noise in the lower bits), which has a highlight stop of 8192 values that will be stored as 341 values in log space (efficient) -- this is fine -- whereas the bottom usable stop (for a 12-stop camera) is the lowest 3 bits, 8 values of mostly noise expanded out to 341 codewords.   That is not going to work.  It also plays havoc on compression efficiency, as the last stop is mostly noise, and noise is incompressible.

We do want to reduce the number of codewords used for the highlights, but we don't give equal weight to the shadows, as the shadows are not all signal.  We want to store the signal without too much overhead from the noise. For wider dynamic range systems there is a lower noise floor, so we need to use fewer bits in the highlights, saving more codes for the lower stops. That is why there are different log curves.
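
The codeword arithmetic above can be checked numerically. A minimal sketch (plain Python, not tied to any codec) counting how many 12-bit codes each stop of a 12-stop source receives under straight linear storage versus the log-113 curve given earlier:

```python
import math

def log_encode(x, base=113.0):
    # output = log10(input * (base - 1.0) + 1.0) / log10(base), 0..1 normalized
    return math.log10(x * (base - 1.0) + 1.0) / math.log10(base)

CODES = 4096   # 12-bit storage
STOPS = 12     # a 12-stop camera

for n in range(STOPS):
    hi = 2.0 ** -n        # stop 0 is the brightest stop
    lo = 2.0 ** -(n + 1)
    linear_codes = (hi - lo) * CODES
    log_codes = (log_encode(hi) - log_encode(lo)) * CODES
    print(f"stop {n:2d}: linear {linear_codes:6.0f} codes, log-113 {log_codes:4.0f} codes")
```

Linear storage hands half of all codes to the top stop; the log curve redistributes them without going all the way to a flat 341 per stop, which, as described above, would blow the bottom stop's noise up across hundreds of codes.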


iaremrsir

Quote from: DANewman on February 12, 2014, 10:35:32 PM
I thought something like that at first, wouldn't you use a curve that divided the stops equally amongst the available codewords? ...



Thanks for the explanation! One thing I noticed is that, compared to something like Log-C or Cineon, the formula you gave clips the black at 0 if the blacks are in fact clipped in the raw data, whereas something like Cineon offsets it. What's the reasoning behind the offset/raised black point? How exactly would you read this formula if you were to graph it? I'm thinking that input would be on the x-axis and output on the y-axis, but I don't know how to visualize that in 10-bit or 12-bit values. Also, how'd you come up with the base of the log, like the 113 and 90? I apologize for asking so many questions, I just want to understand this more. I want to do engineering that involves something with cameras and/or sound... Is there a name for that? Anyway, thanks again.

DANewman

The Cineon curve is a byproduct of film scanning, and it relates to the nature of negative film.  The large lifted black is mostly a waste of codewords (in my opinion), but it can help in film-style workflows for those that still think that way.  The data that represents black is always lifted slightly, as you do want to encode the noise (just not too gained up), not clip it.  The sensor data is often significantly lifted, so that DC offset is removed before applying the log curve.

Excel spreadsheet with the formulas: https://dl.dropboxusercontent.com/u/5056120/protuneVgamma.xls
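
A small sketch of the black handling described above. The black and white levels here are hypothetical 14-bit values for illustration only, not from any particular camera: the sensor's DC offset is subtracted before the log curve is applied, so sensor black encodes to code 0 rather than to a Cineon-style lifted black.

```python
import math

def log_encode(x, base=113.0):
    # output = log10(input * (base - 1.0) + 1.0) / log10(base), normalized 0..1
    return math.log10(x * (base - 1.0) + 1.0) / math.log10(base)

def normalize(raw, black=2048, white=15000):
    # subtract the sensor's DC black offset, then scale to 0..1;
    # black/white are made-up 14-bit levels for illustration only
    x = (raw - black) / float(white - black)
    # clamp; a real pipeline might keep a slight lift so noise is encoded, not clipped
    return max(0.0, min(1.0, x))

print(log_encode(normalize(2048)))        # sensor black -> 0.0
print(round(log_encode(normalize(8000)), 3))
```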

iaremrsir

Thanks for explaining all of this. And thanks for the spreadsheet as well; it gives a much better understanding of how it works compared to video gamma.

zcream

@DANewman

Would you consider passing me the source of RAW2GPCF?
This is exactly what I am trying to do with the API of the Flea3.
http://personal-view.com/talks/discussion/8944/3d-3k-12-bit-raw-camera-project

I am stuck due to the lack of documentation on the various fields in the CF header, as well as
my lack of knowledge of packed vs unpacked RAW inputs.

zcream

This should demonstrate the problem.
The Flea3 cameras saved this raw file sequence because I chose a sequence of images.
raw2gpcf cannot open the raw files.

>>>
https://dl.dropboxusercontent.com/u/9906333/fc2_save_2014-02-19-184008-0000.raw

>>>>

C:\Users\Owner\Documents\RAW2GPCFv114>RAW2GPCF.exe e:\*.raw e:\*.avi
e:\*.raw file could not be opened

done


>>>>
C:\Users\Owner\Documents\RAW2GPCFv114>dir e:
Volume in drive E is Record
Volume Serial Number is 6EC8-D020

Directory of E:\

19/02/2014  06:40 PM         6,220,800 fc2_save_2014-02-19-184008-0000.raw
19/02/2014  06:40 PM         6,220,800 fc2_save_2014-02-19-184008-0001.raw
19/02/2014  06:40 PM         6,220,800 fc2_save_2014-02-19-184008-0002.raw
>>>


DANewman

zcream,

Use the CineForm codec SDK.  http://twitter.com/CineFormSDK
What are you referring to here? "lack of documentation related to various fields in the CF header."

Use BYR4 (16-bit linear format -- easy) re: "the lack of knowledge of packed vs unpacked RAW inputs."

David




zcream

Hi David. The sample code is DPX2CF.
For fast realtime compression, I would need to access the asynchronous threaded API.

In the example given, threading can be done by just passing different files to different threads.

For a single file, I would assume threading would be done by breaking it into segments and then passing them into the encoder using the threaded encoder API.

As there is no example of threaded encoding within a single file, I have not been able to figure out these details.

I had stopped coding against this API last month and did not keep notes. There were a couple of fields that were not very clear. I will post them once I start looking at the code again.
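
The segment idea can be illustrated generically. This is not the CineForm SDK's API, just a hedged sketch of splitting one file's frames into contiguous chunks and handing each chunk to a worker thread, with a stand-in encode function in place of a real codec call:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(frames):
    # stand-in for a per-thread encoder instance; real code would call the
    # codec here -- this just maps each frame to its byte length so it runs
    return [len(f) for f in frames]

def encode_file(frames, workers=4):
    # break the frame list into contiguous segments, one batch per worker
    step = (len(frames) + workers - 1) // workers
    segments = [frames[i:i + step] for i in range(0, len(frames), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(encode_segment, segments)  # map preserves order
    return [r for seg in results for r in seg]        # reassemble in order

frames = [b"frame-%d" % i for i in range(10)]
print(encode_file(frames))
```

The key point is only that segments stay contiguous and results are reassembled in submission order; a realtime pipeline would likely hand frames to workers as they arrive instead of batching the whole file first.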

zcream

My raw data is packed 12-bit data,
so we are looking at 2 pixels in 3 bytes. Would I need to unpack it to a 16-bit RAW file with the unused bits at zero?

If I am planning to do this in realtime, this will incur a performance penalty.

Quote from: DANewman on February 26, 2014, 07:18:10 PM
zcream,

Use the CineForm codec SDK.  http://twitter.com/CineFormSDK
What are you referring to here? "lack of documentation related to various fields in the CF header."

Use BYR4 (16-bit linear format -- easy) re: "the lack of knowledge of packed vs unpacked RAW inputs."

David

DANewman

The codec uses 16-bit SSE2, so the unpacking has to occur somewhere.  This might be 5-10% of the compute load.
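
For reference, one way the 2-pixels-in-3-bytes unpack can look. The bit order here is an assumption (MSB-first packing) and may not match the Flea3's actual layout; it is plain Python for clarity, not SDK code:

```python
def unpack12(packed):
    # unpack 12-bit pixels, 2 pixels per 3 bytes, assuming MSB-first layout:
    # [p0 high 8][p0 low 4 | p1 high 4][p1 low 8]
    out = []
    for i in range(0, len(packed) - 2, 3):
        b0, b1, b2 = packed[i], packed[i + 1], packed[i + 2]
        p0 = (b0 << 4) | (b1 >> 4)
        p1 = ((b1 & 0x0F) << 8) | b2
        # shift the 12-bit values into the top of a 16-bit word (BYR4-style)
        out.append(p0 << 4)
        out.append(p1 << 4)
    return out

# pixels 0xABC and 0x123 packed as bytes AB C1 23
print([hex(v) for v in unpack12(bytes([0xAB, 0xC1, 0x23]))])  # ['0xabc0', '0x1230']
```

A production version would do this with SIMD or a lookup table rather than per-byte Python, but the shifts are the same.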

zcream

Quote from: DANewman on March 03, 2014, 03:50:36 PM
The codec uses 16-bit SSE2, so the unpacking has to occur somewhere.  This might be 5-10% of the compute load.

Does CF only accept 16-bit or 8-bit?

DANewman

Quote from: zcream on March 04, 2014, 04:23:50 PM
Does CF only accept 16-bit or 8-bit?

For RAW, only 16-bit should be used.   For YUV, v210 is a 10-bit format, and there are several 10-bit RGB pixel formats.

zcream

I've done the conversion to 16-bit and am writing out a workable DNG header.

Is there a preference for the asynchronous vs encoder route?
I think only the async route offers multithreading, but I don't see how to split up a file into chunks for single-file MT (the example uses multiple files).

I would be encoding in realtime.

Also, can you deal with dark frame and bad pixel issues?
Do you need the XYZ-to-RGB matrix specific to the camera passed to CF?

It's quite exciting; if I can make this work, the Flea3 camera offers 4K in 8-bit RAW for 700 bucks.

iaremrsir

For sources that have north of 13 stops of dynamic range, should we just use the 13-stop curve?

zcream

Is this a curve used to map 12-bit linear RAW to 10-bit log?

Can I get a reference or a citation?

EDIT: The Internet is a wonderful resource.
I read up on log mapping and LUTs. These are called linearization tables in the DNG spec.

David, I am just trying to figure out why my LoadLibrary calls to the CFHDEncoder DLL are failing.
I have a license -- do I need to supply it as a string to the LoadLibrary call?
And would this be the activation code?

ATM, I am not passing a license.


DANewman

Quote from: iaremrsir on March 06, 2014, 07:43:09 PM
For sources that have north of 13 stops of dynamic range, should we just use the 13-stop curve?

That works, as do the other Protune curves; the 13-stop curve places a little more emphasis on the shadows.