Messages - TKez

#26
The feature is there for standard video recording, but I can't seem to do it for RAW.

On cameras with limited SD bandwidth, in my use case I'm often just ready to continually restart; gaps aren't that important.

Any way? LUA maybe?
#27
Quote from: Teamsleepkid on May 05, 2017, 06:58:58 PM
Just give up.. I used to run around in circles trying to get rid of moire. You can't not without losing tons of detail and looking like SD footage. Maybe the mosaic engineering vaf? Seems like the only real solution.

Yeah, I've come to that conclusion (though I've found the ACR defringing tools can work wonders for the coloured variety, even though it's not designed for this).

Well, I'm on a 70D, and the VAF's ludicrous pricing with intl. shipping makes it almost the same as trading the 70D for a 5D3. I'm amazed anyone buys them.
#28
Testing different ways to deal with moiré, and given that blurring and upsampling are visually similar, am I giving up one of the reasons for shooting raw in the first place? (444; I know the bit depth is still there.)

#29
Quote from: TechnoPilot on April 30, 2017, 10:10:49 PM
Alright it is important to note the difference, log versus raw is very different.  It not only comes down to the dynamic range of the footage, but more importantly the bitdepth.  I will shoot with a log picture style for most of my work if I don't have to/plan to push the image to hard in my manual grade in post.  If you push the h264 files to hard the colours will heavily band and you will get blocking artifacts due to the heavy compression.  RAW on the other hand is 14bit (depending on RAW settings) versus h264's 8bit, it can be worked very hard before breaking down.  Personally my workflow is to take the RAW MLVs colour balance them and convert them to an arri standard log with some noise reduction before outputting them in Cineform, a GPU accelerated codec that outperforms ProRes and is resolution agnostic, to use in my NLE where I preform my final grade.

Sent from my SM-G930W8 using Tapatalk

I wasn't talking about fake log colour profiles for H.264 (IMHO you're better off trying to get the most out of your 8 bits by exposing your shot properly, using standard/neutral profiles, and utilising the full 0-255 range of data values. The sacrifice to colour tones is just too great to force log into 8 bits).

What I'm referring to is converting 14bit RAW to a 10/12bit Prores using a log colourspace such as VisionLog / Cinelog-C via ACR or other method, which seems to be the workflow of choice for some people on this forum.
Real high-bit-depth log colourspace files like this (like their in-camera counterparts, Arri Log-C / BMD Log etc.) are really not so different to RAW at all when it comes to grading latitude. But of course we don't have that option; we can only go H.264, MLV, or MLV->Log ProRes.


And yeah..... if you're a FCPX user you'll know that ProRes is lightning fast. I can have 4+ layers of 1080p with FX, all in butter-smooth realtime. ProRes and FCPX were designed for each other though; if you're using Windows or another NLE, YMMV.
If Cineform can beat that, I'd like to see it.

Sidenote: With the trend pointing at universal adoption of H.265 HEVC in GPUs and camera SoCs, things are about to change rapidly for codecs. With an All-I (GOP 1) high-bitrate 10bit 444 H.265 file rivalling ProRes 444 in quality and edit speed, at a fraction of the file size, it's not hard to see where things are going.
These new chips from Ambarella (https://www.ambarella.com/news/94/122/Ambarella-S5-IP-Camera-SoC-Delivers-4K-10-bit-H-265-Video-and-Multi-Imager-Capabilities-to-the-Professional-Surveillance-Industry) mean even tiny action cams will be encoding this on their hardware very soon, and your GPU will be decoding it in realtime back at the computer. That will be hard for 3rd party codecs to beat!
#30
As I understand it, for cameras that can record Log natively, it can provide most of the dynamic range benefits of RAW, whilst keeping the convenience of single-file clips and the space conservation of [barely] compressed formats like ProRes. (From here on in, 'ProRes' is interchangeable with your intermediate codec of choice.)

But given that ML users must shoot RAW to get these benefits, by the time we get MLVs into a usable format we've already had to deal with (to varying degrees depending on model) the space/time/res limitations and extra workflow steps involved with shooting RAW. So as we're not really able to cash in on a lot of the conveniences that a true cam->NLE log workflow provides, I'm interested to know why the people using things like VisionLog / Cinelog-C in their workflows choose to do so.

I assume some people would like to go Log-Prores to keep dynamic range available whilst maintaining the playback/scrub speed of Prores in NLEs like FCP.
If so do you....?

1. Drop a Log->Rec709 LUT on your Log clips (or output/timeline) by default so you're viewing as Rec.709, and do CC/grading before the LUT in the chain?
   This would obviously have the benefit of allowing clipped areas to be recovered if needed, but should otherwise look like the Log step never happened. Other than that, is there also a specific benefit to applying colour correction while in the Log colourspace, as opposed to filming 'flat' (contrast/saturation reduced) Rec709? Another way to say this is: if I'm displaying Log footage through a Rec.709 LUT but adjusting curves before that LUT, the function curve is affecting a very different set of values than if it were after the LUT. Do people use this to advantage? What's the theory?

2. Drop a "Film Look" type LUT on the log footage that has the Log->Rec709 transfer built in?
   Since I've been playing with Log and thinking about it, this should effectively give the LUT access to values that lie outside the range of Rec.709, and presumably the opportunity to map those values into a more pleasing highlight rolloff. In testing though, I can't say such a subtle difference justifies the extra steps and CPU load.

3. Begin grading directly from the Log footage, working contrast and saturation back in manually?
I hear a lot that log is just an intermediate step for maintaining DR, and not intended for output. But I swear a lot of new TV series with a flat look resemble Log footage with the blacks pulled down and some saturation in the mids. Is this the case, or am I just misinterpreting a stylistically flat, but not necessarily log, grade?
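The curve-order question in option 1 can be sketched numerically: a curve applied before a Log->Rec.709 LUT operates on log-encoded values, so the very same curve produces a different image than when applied after the LUT. The transfer functions below are invented for illustration (not any real camera's log curve or LUT):

```python
import numpy as np

# Toy transfer functions, NOT a real camera's: just to show that a grading
# curve applied before a Log->Rec.709 LUT is a different operation than the
# same curve applied after it.

def log_to_rec709(x):
    # hypothetical log decode: expand flat log values toward display gamma
    return np.clip((10.0 ** (2.0 * x) - 1.0) / 99.0, 0.0, 1.0)

def contrast_curve(x, amount=0.3):
    # simple linear contrast boost pivoting around mid grey (0.5)
    return np.clip(0.5 + (x - 0.5) * (1.0 + amount), 0.0, 1.0)

log_ramp = np.linspace(0.0, 1.0, 11)

curve_before_lut = log_to_rec709(contrast_curve(log_ramp))  # curve in log space
curve_after_lut = contrast_curve(log_to_rec709(log_ramp))   # curve in display space

# The two orders disagree: in log space the curve's 0.5 pivot sits near
# middle grey, while in display space 0.5 is already a bright value.
print(np.max(np.abs(curve_before_lut - curve_after_lut)))
```

So yes, grading before the LUT is mathematically a different tool, which is exactly why where you park the LUT in the chain matters.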
#31
Chiming in to add/confirm reports: on the 550D, every 2nd frame is a copy of the last frame, except the top third, which looks like the current frame, only shifted on the X axis.

If I can contribute to testing in any way, gimme a job! The 550D is my 'B' cam, but I'd love to get more than a couple of seconds of RAW at full res.
#32
Great app, I'll be looking forward to trying it out when OpenGL is implemented.

Some suggestions for making this app killer for fast workflows.... if I may, ahem.

-Left pane browser (instead of '+') with folder favourites. See DaVinci Resolve for a good example.
-Some way to tag MLV files. Using the standard Apple Finder tag meta would be great so it would reflect in Finder. Just the color tags would do.
-Comment files using the standard Finder comment meta.
-Delete MLV files from the app.
-Quick Look plugin. I've experimented with this before, and it looks like the best we could hope for (since 10.6) would be a single-frame preview. But when logging and organising 100s of MLVs, even a single poster-frame preview would be a godsend. Seeing as you are using AVFoundation already, this should be pretty easy.
#33
Quote: Nonsense, your post is what is a joke. MLVFS is extremely fast, and myself (and countless others) get real-time speed. If you are having issues, where is your bug report?

Well, in my experience with Resolve, I could get very close to realtime playback from DNGs exported to a real (non-SSD) drive. When I tried this with the same DNGs directly from the MLVFS virtual drive, I could only get a few fps, which wasn't usable for me.
I did not put this down to a bug, but rather assumed there must be too much extra overhead from the virtual file system converting between MLV and cDNG on the fly.

I haven't got MacFuse installed right now so I can't retest, but if you're saying that nobody else notices any difference in speed between real and virtual files, I'm open to it being a specific issue with my system.

The only thing I can think of is that running on a 1GB Radeon HD 6870 didn't give enough hardware acceleration for Resolve, so it was maxing out the CPU.
In that case, maybe the added CPU overhead of the on-the-fly conversion was just too much.
#34
In my experience...

MLRawviewer
-Fast / Stable / Cross-platform. Great for quickly going through a card full of MLVs, as you can preview and export only what you need very easily.
-It's mostly keyboard-controlled though, which can be annoying if you forget the keys.

MLVFS
-More thorough options for things like Dual ISO and de-banding.
-Great idea in theory using a virtual file system, but using it this way is painfully slow. Forget working in Resolve like this. You'll have to drag the DNGs to a real folder to get reasonable speed. If you're doing the AE/ACR method, you'll be used to a painfully slow conversion process anyway, so you may not notice.

If you want a fast workflow while still keeping 14bit raw, you need to get the cDNGs to a real folder on a hard drive, preferably SSD.
You can either do this by going through each shot in MLRawviewer and pressing 'E' to add it to the export queue,
or by using MLVFS and dragging the cDNGs from the virtual drive created by MacFuse to a real drive.
If you have Resolve configured for performance, they should play in realtime without conversion.
DON'T attempt this using just MLVFS's virtual DNGs though, it's a joke.

I'd say, unless you have banding issues or are using Dual ISO, just export through MLRawViewer. It's far fewer steps to get cDNGs, and no 3rd party software to install (MacFuse).


#36
Modules Development / Re: Full-resolution silent pictures
December 24, 2014, 08:41:32 PM
@bookemdano. I've made a slider with an arduino and stepper that moves a specified amount, then fires the half shutter (operated via iPhone app). I'd been thinking recently that the same design would suit telecine very well. You just have to swap the AC motor for a stepper then fps and sync are non issues.
#37
Hardware and Accessories / Re: Remote joystick
December 12, 2014, 02:45:14 AM
Theoretically, any function supported by EOS utility could be controlled over the USB port if you got busy with an Arduino or similar (https://www.youtube.com/watch?v=Rr_gNl3NzK8), otherwise the only external inputs you have access to are half shutter and full shutter press via the remote cable, (+ face sensor and audio trigger) so the magic lantern features can only be controlled by those.
#38
General Chat / Re: NEW H.265 CODEC
December 11, 2014, 11:01:02 AM
^what he said :)
#39
General Chat / Re: NEW H.265 CODEC
December 10, 2014, 03:09:16 AM
I've just learned that H.265 AND H.264 can support 10bit colour. Pretty annoying that camera devs aren't utilising it.
#40
Raw Video / Re: qDslrDashboard on iPad
December 02, 2014, 12:19:20 AM
Doesn't work over USB, only WiFi for cameras that have it (no video). Best at the moment is DSLR Controller for iOS, as it's almost as fast as HDMI, but it's for jailbroken devices only and it's purely a viewer, no controls yet. And it doesn't work with raw. AFAIK raw disables this H.264 output stream, as it doesn't work with the Canon EOS Utility either. So someone would have to make a modded version of the MLV module (likely sacrificing data rate).
#41
Raw Video / Re: ML .RAW plugin integration OS X
November 28, 2014, 09:06:56 AM
Considering the wide support for ARRIRAW, batch converting from MLV to ARRIRAW would certainly be a handy workflow solution, though it's hard to tell if it's open on the input side (encoding). Probably not, as it would be counterproductive.

Does MLV use any form of encoding? I wonder how different the data is besides headers and data structure. (ML module possible?)
Then you have native QT including thumbnails and Quick Look, native FCPX support, native Resolve support, and plugins for pretty much everything else.

From the website:

Quote: Is ARRIRAW an open format?
We recently submitted the ARRIRAW format and header specifications to the SMPTE, requesting the initiation of a new Registered Disclosure Document (RDD). Once the document has been accepted, the SMPTE will publish the RDD on its periodic distribution media and on the SMPTE website. This will ensure that the format is open to anyone who wants to develop applications around it.

Quote: ARRI offers a software development kit (SDK) for ARRIRAW processing that software vendors can incorporate into their application. ARRI also supports vendors who wish to implement the ARRIRAW processing procedure on their own. Depending on the implementation, the following processing settings can be adjusted:

Exposure Index (ASA rating).
white balance.
tint (green-magenta shift).
current or legacy debayering mode.
output color space for HDTV, digital cinema (P3), ACES and Log C wide gamut.
different standard aspect ratios.
output resolution.
sharpening of the image output.

With the ARRIRAW SDK, you can also apply a custom ARRI Look File, which offers primary adjustments through printer lights, saturation and additionally lift/gamma/gain (or slope/offset/power).

No info on where to get such an SDK though. I imagine they expect direct contact from developers.

#42
Raw Video / Re: ML .RAW plugin integration OS X
November 27, 2014, 01:38:46 PM
Quote: I would like to see a ArriRaw type file support which by the way FCPX has native support with Raw adjustments.

http://www.arri.com/camera/alexa/learn/arriraw_faq/

Looks like they made it pretty easy for Apple with an SDK and all. MLV would likely need the same, though Apple's not likely to want to be seen supporting it when you consider their stance on modding firmwares.
#43
Raw Video / Re: ML .RAW plugin integration OS X
November 21, 2014, 12:43:31 AM
Eventually I hope we can get some sort of native support in FCPX and DaVinci. MLVFS is a great idea, but I can't preview in real time in DaVinci with it, so I don't find it practical. Batching to cDNG or ProRes with MLRV still works best for me.

I do really miss Quick Look though. I have my thumb permanently resting on the space bar when I'm working, as I find instant preview integral to fast workflows when rummaging through files.

Here's a Quick Look plugin that already supports most of ffmpeg via VLCKit, so it just needs the MLV implementation in ffmpeg to mature, and then our MLV icons will show snapshots. See here for the dev's comments: https://github.com/Marginal/QLVideo/issues/10

No video unfortunately, due to the aforementioned AVFoundation migration, but there's a chance it could work in a hacky way if we can work out how to find the memory address of the Quick Look window.
#44
Quote: @TequilaKez If you press C, it will process all the MLV files in the same folder as the one you loaded.

Damn it, it was there! I looked it over a few times, but I guess I was scanning for the word 'batch' or something.
That's great.
Still, the droplet-like functionality would be even better :)
#45
Raw Video Postprocessing / Re: Editing and CC in ProRes
October 29, 2014, 11:02:35 PM
Well, I'm hoping someone can chime in here as well, as I'm not exactly sure what they all are!
Eventually I'm hoping we'll be able to use VisionLog etc., which are specifically designed to get the most out of Canon sensors.
Until then, if you're going for the flat log look anyway, just choose one that looks good.
Or if you intend on going back to a more contrasty look and just want the grading latitude, choose one that has a corresponding 'undo' LUT in resolve.
#46
Would it be easy to add a batch feature so we could drag multiple items and have them added to the queue with the current settings?

EDIT: and draggable to the dock icon as well as the window? XLD is a great utility for transcoding audio: I just drag a bunch of AIFFs onto the dock icon and Apple Lossless versions magically pop up on my desktop. Same with Compressor droplets. Great for speedy workflows!

#47
Raw Video Postprocessing / Re: Editing and CC in ProRes
October 29, 2014, 04:14:13 AM
Yes, I've gravitated to this workflow as well.
Filming normally in H.264, on top of only 8 bits, you suffer very low resolution colour (4:2:0 chroma subsampling), nasty H.264 artifacts, and usually clipping at either or both ends.
RAW->Prores gives you full color res, virtually lossless, and if you use one of the log curves via MLRV, you can retain nearly all of your highlight and shadow info.

Cinestyle / Marvel profiles were a good idea, but if you've ever tried CC with them you'd know that log and H.264 don't work very well together. We're trying to compress info into the middle of the colour range so we can play with it later, but H.264 was designed to discard all that extra info, as it's barely visible to the eye when viewed as-is.

Even at 10bit it's a huge advantage over standard mode.
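The bit-depth point can be made concrete with a quick quantization experiment: quantize a log-encoded ramp to 8 bits vs 10 bits, decode back to linear, and count how many distinct levels are left in a single stop of highlights. Few levels means visible banding once you start pushing the grade. The log curve below is invented for illustration, not any real camera spec:

```python
import numpy as np

# Sketch: quantize a log-encoded ramp at different bit depths, decode back
# to linear, and count the distinct levels left in the brightest stop.
# The log curve here is made up for illustration, not a real camera's.

def log_encode(lin):
    return np.log2(lin * 255.0 + 1.0) / 8.0    # [0,1] linear -> [0,1] log

def log_decode(enc):
    return (2.0 ** (enc * 8.0) - 1.0) / 255.0  # inverse of log_encode

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

ramp = np.linspace(0.0, 1.0, 100_000)
levels_in_top_stop = {}
for bits in (8, 10):
    decoded = log_decode(quantize(log_encode(ramp), bits))
    # the brightest stop: linear values between 0.5 and 1.0
    levels_in_top_stop[bits] = len(np.unique(decoded[decoded > 0.5]))

print(levels_in_top_stop)  # 8 bits leaves far fewer steps than 10 bits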

#48
That last image looks really nice.
How much do you think the Tiffen filter contributed to the look?
I'm assuming it's a physical filter and not the plugin?
How many light sources?

#49
Decided to start another thread as the forum suggested it, but it's really an extension of this thread.

http://www.magiclantern.fm/forum/index.php?topic=8389.0

Apologies if someone's gone here already, but I couldn't find it.

Anyhow, I couldn't come to a conclusion about which ProRes format suits best, and it seems opinions vary.
Do we use 422 and not care about the chroma resolution loss, as the difference will be negligible?
Or keep all the data and use 4444, possibly wasting space as we don't need alpha?

Well, I did a test by bringing a DNG sequence into AE and using 'Set Matte' to create some alpha.

RGBA -> Prores 4444 = 140.2MB
RGB -> Prores 4444 = 78.4MB     
RGB -> Prores 422(HQ) = 66.1MB 
RGB -> Prores 422 = 49.4MB

RGB 4444 is ~56% the size of the RGBA version, but only ~19% larger than its RGB 422 HQ counterpart.

So my conclusion is: an unused alpha channel compresses down to almost nothing, so 4444 is not wasting space.
May as well be safe and use 4444, as the size difference over 422 HQ is only marginal.
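The ratios above can be double-checked in a few lines (sizes in MB taken from the export test):

```python
# Double-checking the ratios from the export test above (sizes in MB).
sizes_mb = {
    "RGBA ProRes 4444": 140.2,
    "RGB ProRes 4444": 78.4,
    "RGB ProRes 422 HQ": 66.1,
    "RGB ProRes 422": 49.4,
}

rgb_vs_rgba = sizes_mb["RGB ProRes 4444"] / sizes_mb["RGBA ProRes 4444"]
extra_over_hq = sizes_mb["RGB ProRes 4444"] / sizes_mb["RGB ProRes 422 HQ"] - 1.0

print(f"RGB 4444 is {rgb_vs_rgba:.0%} the size of RGBA 4444")   # ~56%
print(f"RGB 4444 is {extra_over_hq:.0%} larger than 422 HQ")    # ~19%
```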





#50
@Andy600 thanks. I have a basic understanding of what log does, except the C-Log / Log-C EI stuff, but thanks for laying it out for me. With regard to the exposure slider in MLRV, it's a handy tool if exposure wasn't set properly when shooting, but I try to make a point of not using it. I also see the most efficient workflow being dragging a bunch of MLVs onto MLRV to batch them all with the current gamma curve, which means no chance to mess with it per clip!
@baldand: any chance of drag and drop batch functionality coming soon?
Being able to fix WB is also nice, if we ever get quick look thumbnails we could identify any to leave out of the batch and add to the queue separately with tweaked WB.

My layman's view of the workflow concept is this: Rec.709 H.264 out of the camera suffers from 3 main issues.
1. Discarded detail and compression artifacts due to H.264
2. Major loss of chroma information due to 4:2:0
3. Highlight and shadow information beyond the white and black points is clipped and non-recoverable

The ProRes 4444 codec fixes the first two, and if the 14 bits are scaled (compressed) down to 10bit (or 12 with XQ), that fixes #3. We lose some bits, but the gamma curve prioritises the most useful parts of the information, so we retain more bits where we need them.
Of course, if we view the entire sensor data compressed like this, it's going to look extremely flat, but we can use a 3D LUT to expand the range back to Rec.709 etc. to view it as it would have looked straight from the camera.
The important part being that this LUT is applied at the end of the processing pipeline, so we still have all that highlight and shadow info to play with before it gets clipped again by the LUT.
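That ordering constraint is easy to demonstrate with toy numbers. If the display LUT (which clips to the Rec.709 range) sits before a grade, over-range highlight detail is already gone by the time you pull exposure down; the "LUT" below is reduced to a plain clip just to show the effect:

```python
import numpy as np

# Sketch of why the viewing LUT belongs at the END of the pipeline.
# Stand-in "LUT": just a clip to the [0, 1] display range.

def display_lut(x):
    return np.clip(x, 0.0, 1.0)

# Working-space values; two of them sit above display white.
highlights = np.array([0.8, 1.1, 1.4])
exposure = 0.7  # pull exposure down in the grade

grade_then_lut = display_lut(highlights * exposure)   # detail recovered
lut_then_grade = display_lut(highlights) * exposure   # detail already clipped

print(grade_then_lut)  # three distinct highlight values survive
print(lut_then_grade)  # the two over-range values are crushed together
```

Grading before the clip keeps all three highlight values distinct; grading after it leaves the two over-range values identical, i.e. the detail is unrecoverable.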

The attractive thing about the VisionLog curve is that the 3D output LUT could be one of their Osiris film LUTs, so we don't need to go through another lossy input LUT step.

Am I on the right track here? The BMD colour space fits in there somewhere, but I'm not sure I understand where and why :)