Show Posts

Messages - Kharak

Raw Video / Re: AI algorithms for debinning
« on: July 20, 2021, 12:36:51 PM »
I think the LJ92 compression is closer to a 40-50% reduction, depending on the brightness of the scene.

The R5 raw is not really raw. I have shot and graded a lot of R5 footage, and the lossy compression is evident in the shadow detail. The noise floor is really bad; barely any information can be recovered from the shadows.

The R5 8K "raw" has less latitude than ML RAW, but that is not surprising given the amount of compression.

If you use FPS Override, you cannot have sound on H.264.

You can test whether proxy recording works with sound while FPS Override is turned off.

Is 50 fps proxy recording stable?

Reset ML settings.

I recall once having a similar issue with "vanilla" 3.5K crop. I don't know what it was, but resetting to the ML defaults fixed it.

Also check your mode dial: is it by chance set to (A+) or one of the other automatic modes?

I recommend Danne's MLV_Dump batch converter.

You can convert the lossless MLVs to lossless DNGs, keeping the original size of the lossless MLVs (no data lost), and then further compress them losslessly with SlimRaw, as explained above.

I would be careful about running a batch tool that deletes the original MLVs as you go.

And secondly, I cannot recommend Resolve enough, also for further batch processing. It is literally 100 times faster than AE. If I render DNG to Cineform without any effects, it runs at 100-150 FPS or faster, with the bottleneck being the write speed of the drives. So for transcoding it is the obvious choice.

Or don't transcode at all and work straight off the DNGs.

I am a little confused about how you end up with twice the data of the lossless MLV. Or do you mean all data combined, MLV plus DNG?

If so, why not convert to lossless DNG and do a second lossless compression pass with SlimRaw to shave off another ~12% (perhaps more)? SlimRaw compresses better than MLV_Dump.

SlimRaw can also do lossy compression at ratios from 3:1 to 7:1, and if the footage is only for rushes, you can set the resolution to half for another 50% saving. But then you will have to hold on to your source MLVs for a full-quality conversion later. That is, if you choose to overwrite the DNGs with SlimRaw's compressed output.

If you split your batch processing into 200-400 GB chunks (or whatever suits you) in separate folders, which I assume you already do, you can compress as you go.

You can also compress the folder you are converting into while the conversion runs (like a watch folder), but SlimRaw is much faster than MLV_Dump, so make sure it does not overtake MLV_Dump. Otherwise you end up with parts of the folder compressed and parts not, and it can be a pain finding where SlimRaw stopped; you will have to recompress parts of the footage, especially if you shot really long takes. SlimRaw does leave a checksum file in the DNG folders where it has compressed the footage, so look for that.

So to be clear: SlimRaw has no official watch-folder support. You can compress from the folder you are converting into, but if SlimRaw reaches the last DNG available in that folder before MLV_Dump is done converting, SlimRaw will stop because it thinks it has finished. So watch out for that; it is better to just do chunks at a time in separate folders.

SlimRaw generates checksums and can run a verification pass as well.

You can even do dual output for backup.

I probably sound like a SlimRaw commercial, but I am not; I just really like the software.

But let me know what your process is. I am not sure what you mean about Cinelog-C when talking about DNG; are you running an AE workflow?

What did you fix with Darkframe Averaging?

The ML Flagship is the 5D MK III.

It has the highest resolutions and slow-mo capabilities.

Best low light of all the ML cameras.

CF and SD card slots (requires at minimum a 1000x CF card).

Continuous lossless 14-bit RAW recording.

Almost no aliasing or moiré.

It's the obvious choice with your budget.

What is the benefit of first converting to linear? In DaVinci Resolve, is that using the OFX Color Space Transform to linear, then again from linear to 709?

There is no "true" benefit; you can grade directly from linear space and get exactly the same results as grading after a linear-to-C-Log, Rec.709, or whatever conversion you need. You just have to pull it differently.

I just mentioned Rec.709 because most people think it's the be-all-and-end-all standard.

The MLV raw data is linear, so if you want to do a color space transform to match another camera's color space, or you have LUTs that expect a certain input (they all do), the correct transform is Input: Linear to whatever color space you want. This is a great way to ensure all footage has the same starting point; you can apply your PowerGrade to everything, and matching will be much easier. When mixing with non-raw cameras, though, you would ideally create another node first for lift/gamma/gain and use the offset to increase or decrease exposure to match your raw adjustments better.
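For intuition, here is roughly what the transfer-function half of such a transform does. This is a minimal sketch of the standard BT.709 OETF applied to linear values, not Resolve's actual CST implementation, and it ignores the gamut (primaries) half of a real transform:

```python
# Minimal sketch: encoding linear scene values with the Rec.709 (BT.709) OETF.
# Illustration only; a full Color Space Transform also converts primaries,
# which is omitted here.

def linear_to_rec709(v: float) -> float:
    """Apply the BT.709 opto-electronic transfer function to one linear value."""
    if v < 0.018:
        return 4.5 * v                      # linear segment near black
    return 1.099 * (v ** 0.45) - 0.099      # power-law segment

# Example: linear mid-grey (0.18) lands around 0.41 after encoding,
# which is why linear footage looks so dark before any transform.
encoded = linear_to_rec709(0.18)
```

This is also why LUTs that expect a log or Rec.709 input look broken on untransformed raw: the values they receive are still in linear light.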

You can do the transform in RCM (Resolve Color Management), and for mixing in other cameras you can right-click the clips on the color page and set the input color space for that clip.

Or use a CST when you prefer, or if you want to manipulate some combination of color space and gamma for creative purposes; it can all be done with the built-in tools.

Edit: First, it is very important that you have a calibrated monitor, or as close to one as possible.

Second, set your output color space and gamma on the render page to ensure you don't get out-of-gamut mapping. It is also good practice to watch your final render on the target device: if you are delivering for television, check the footage on a TV, preferably several, to see what the average result will look like (they will all look different depending on the brand). Of course, if it's for HDR, watch it on HDR TVs, and if it's for the internet, check on different computer monitors, phones, etc.

General Chat / Re: Any thoughts on this idea?
« on: February 20, 2021, 12:22:40 PM »
I don't get it. What is the new idea in this? Why not crop in post?

You have a preconceived idea of the raw workflow that is wrong.

If you are clinging to some LUTs for your grading, they will expect a certain input, and you have to bring the RAW footage to that input. You can set the input to Linear and the output to Rec.709 for a "normal" look, or set the output to any of the major log profiles.

But raw is raw: it's 14-bit footage that you can do anything with, and only your skill (or lack of it) in the grading suite will determine the end result.

And yes, IMO the LUTs in Resolve are for decorative purposes.

"Without these profiles, it can be almost impossible to accurately colour grade raw files coming from the 5D. (There are many LUTs in DaVinci Resolve to provide these colour profiles for all kinds of cameras, but not for the 5D Mk III, since it doesn't officially record raw video.)"

This is just plain wrong. Just set the input to Linear and whatever output you want from there.

General Help Q&A / Re: Sounds familiar ;-)
« on: February 15, 2021, 09:54:58 AM »
I can't make sense of any of this, neither the video nor the article.

The examples shown in the video have blown-out skin tones and crushed shadows. The clip at the end with the girl in the grass, when they combine it, just looks brighter and still has a ton of crushed blacks. The picture of the guy on the beach looks like typical phone DR, with barely visible detail in the shadows.

Of course, the video is a commercial; it's just buzzwords. I actually watched it without audio the first time, and the flashing buzzwords were comical, especially combined with those horrid examples (not the compositions, those were great). Yes, the pictures are professionally graded, as stated down in the corner; perhaps the pro thought it best to hide the atrocities in the shadows, who knows.

I am not trying to shoot this down; compared to Samsung's previous phones, this probably is a huge step up, I don't know.

I think the author of the article does not understand dual-gain output. In the article he says it is just bracketing, even though he quotes the video saying it does two gains at once. I don't think it is bracketing; if it were, it would make no sense for Samsung to claim it is innovative, since they obviously know about bracketing. I think it is their own version of dual-gain readout (see the Alexa, C300 Mk III, and C70). From the explanation in the video, I don't see it as bracketing.

Since the iPhone 9 or 10, all Apple mobile devices do multi-capture (bracketing). The iPhone (depending on the version) captures a bunch of images in rapid succession and combines them into one: great for DR, horrible for skin and texture. My old SE takes better pictures than the new SE; skin tones are always a blob of yellow/orange, with no texture at all in the skin, or the walls for that matter.

I noticed it immediately on the very first picture I took with the new SE and blamed it on noise reduction. Now I know why.

Same for video: the new iPhones capture two exposures per frame, one low and one high, and merge them. So you are always shooting HDR.

If the Samsung actually does a dual-gain output, that would be a huge step up from the merging mess every other phone is doing: you get textures back, and no need to fear motion.
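For intuition, here is a purely conceptual toy of what a dual-gain merge looks like. The gain ratio, clip level, and blend rule below are all made up for illustration; no real sensor works exactly like this. The key point is that both reads come from the same single exposure, so unlike bracketing there is nothing to misalign between frames:

```python
# Toy sketch of dual-gain readout merging (conceptual only). One exposure
# is read out twice: a high-gain read with clean shadows that clips early,
# and a low-gain read that holds the highlights. All constants are
# hypothetical illustration values.

HIGH_GAIN = 4.0   # hypothetical gain ratio between the two reads
CLIP = 1.0        # normalized clip level of each readout

def merge_dual_gain(signal: float) -> float:
    """Merge two reads of one exposure into a single linear value."""
    high_read = min(signal * HIGH_GAIN, CLIP) / HIGH_GAIN  # clips early
    low_read = min(signal, CLIP)                           # holds highlights
    if signal * HIGH_GAIN < CLIP * 0.9:  # well below the high-gain clip
        return high_read                 # use the cleaner shadow read
    return low_read                      # otherwise trust the low-gain read

shadow = merge_dual_gain(0.01)    # served by the high-gain read
highlight = merge_dual_gain(0.8)  # preserved by the low-gain read
```

Bracketing would instead capture two separate exposures at different times, which is exactly where the motion artifacts come from.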

But as an end note, the colors out of every Samsung I have seen are the most "phonish" of them all: ugly reds and greens.

Cool! Working. Enabled/disabled it, and it did exactly what it's supposed to (tested on a 100D).

I do not have this option on the crop_rec_4k.2018Jul22.5D3123 build.

I can do Config Preset - Startup Mode; after restarting, no modules are loaded, but ML still boots.

Is the option hidden somewhere else? Am I missing something? Do I need to load some config from here onto my SD card?

Duplicate Questions / Option to hold SET to boot ML
« on: February 02, 2021, 10:03:24 AM »
I hope it's okay that I add this to this thread; I thought it would be, since the thread mentions journalism in dangerous countries, and honestly, adding it to the feature requests section seems like a black hole. But feel free to move it if needed, or tell me to.

This is something I have thought about for a while.

On camera startup, holding SET prevents ML from booting. Would it be possible to add an option in ML to reverse this, so that you need to hold SET to boot ML? This is not encryption per se, but ML in this case functions like the key to play back and review the MLVs.

The scenario is that the camera is confiscated, or you are required to show your contents to an "official", and you have sensitive MLVs on the camera. Obviously you would keep some "innocent" CR2s and H.264 files on the camera to mask the true contents.

I am not sure how Canon reads a SET press at startup without ML booted; perhaps it requires that ML is at least partially booted.

This is not going to stop someone from grabbing your cards and finding out that those 50 pictures on the card somehow take up 90 GB, and "what the f*** is .mlv?". But as a first line of defence, combined with some sweet talking, it could go a long way.

Let me know what you think.

Raw Video Postprocessing / Re: UGLY clipping samples
« on: February 02, 2021, 09:13:30 AM »
Try the same clips in Resolve. Yes, they will have pinkish highlights, but not as saturated and artifact-ridden as in your example.

I usually mitigate it by turning luma mix off and pushing green into the highlights; an easy fix. All in an RCM workflow, of course.

I can't find the thread by baldavenger, but he made a post about altering the DNGs in order to get perfect white and a full signal straight into Resolve. In his workflow, highlight reconstruction no longer works, so pulling the highlight slider has no effect. Maybe it was a combination of one of his DCTLs and DNG manipulation; I can't recall, and I never tried it.

EDIT: How do your clipped blues look on the waveform? I assume clipped; try adding green and red and subtracting blue.

RE-EDIT: Found the baldavenger post. Very good read.

RE-RE-EDIT: The clipping is also heavily influenced by your color space. For example, squeezing 14-bit linear into a Rec.709 color space will cause clipping and out-of-gamut solarization that has to be adjusted for. Going the other way, a Linear-to-ACES transform (a huge color space) will cause pink highlights, because the RGB pixels do not clip at the same level. As with the majority of sensors, the RGGB pattern will cause pink highlights in the sun, or a black sun in Blackmagic's case (not sure about BRAW).
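A minimal sketch of why unequal channel clipping reads as pink (the white-balance gains below are made-up example values, not any real camera's calibration): after white balance, the green channel ends up clipping lower than red and blue, so a fully blown highlight comes out magenta/pink instead of white unless all channels are re-clipped to a common level.

```python
# Illustrative sketch: why unequal per-channel clipping turns blown
# highlights pink. The WB gains are hypothetical daylight-ish values,
# chosen only to demonstrate the effect.

SENSOR_CLIP = 1.0                            # all raw channels clip here
WB_GAINS = {"R": 2.0, "G": 1.0, "B": 1.6}    # hypothetical WB gains

def develop(raw):
    """Apply white-balance gains to a (clipped) raw RGB triplet."""
    return {ch: min(raw[ch], SENSOR_CLIP) * WB_GAINS[ch] for ch in raw}

# Fully blown highlight: every raw channel is at the clip point.
blown = develop({"R": 1.0, "G": 1.0, "B": 1.0})
# After WB, R and B sit above G, so the "white" highlight reads magenta.

def reclip(rgb):
    """Naive fix: clip all channels down to the lowest channel's level."""
    level = min(rgb.values())
    return {ch: min(v, level) for ch, v in rgb.items()}

fixed = reclip(blown)  # equal channels again -> neutral white
```

Highlight-reconstruction tools in raw developers do something smarter than this naive re-clip, but the underlying problem they are solving is the same.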

Forum and Website / Re: Magic Lantern Glossary: v0.8 ready for review
« on: January 30, 2021, 08:11:14 PM »
Of course, I meant "can you add XXX (Xtra Xenon Xexy)" in the sense of adding the entire expansion, not just another acronym that people would get lost in.

I don't know what you mean by

you are editing a wiki.

Do you mean "then you are adding to the wiki"?

Because I can't do any editing on the glossary.

And I do suggest the glossary be added to the front page, or a huge fat link put at the top of the forum.

Edit: Found it under User Guide. I am probably more blind to it than newcomers are.

Forum and Website / Re: Magic Lantern Glossary: v0.8 ready for review
« on: January 30, 2021, 06:03:45 PM »
Thank you,

This was very much needed!

If we find other acronyms that are not in the glossary, can we post them in this thread, be it a question ("what does ARM stand for?") or a request ("can you add XXX?"), and you add them to the glossary (if deemed relevant)?

My apologies, I thought it was you posting about the spam filter.


Raw Video / Re: History of ML cracking RAW
« on: January 24, 2021, 10:14:25 PM »
Vogonism: thanks, I learned a new word today, and I fully understand it from the ML perspective.

I was IP-banned today. I do use a VPN, but I had to search long to find an IP that would let me in. I also could not log in to report the issue.

@Walter, I would appreciate it if you could limit your spam IP list or narrow it down somehow.

Raw Video Postprocessing / Re: Vertical Lines on under exposed dark areas
« on: January 21, 2021, 12:33:34 PM »
What you are referring to is FPN, Fixed Pattern Noise.

Neat Video takes care of some of it.

Darkframe Averaging also removes some of it.

But in the end you are digging in the noise floor. I have in-house noise profiles that clean up the noise floor perfectly, except for the FPN, because the FPN is "detected" as part of the scene. I believe the FPN could be removed and the affected rows interpolated back into the image with the right algorithm, because in most cases the FPN is a row of pixels that is slightly under- or overcharged compared to the neighbouring rows. But I am not capable of creating anything like that. I keep hoping Neat Video will add some kind of FPN remover; we will see.
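The row-offset idea can be sketched roughly like this. It is a toy illustration that assumes the FPN shows up as a constant per-row offset; real FPN removers are far more careful about not flattening actual scene detail:

```python
import numpy as np

# Toy sketch of row-pattern FPN removal: treat each row's deviation from
# the image's global mean as a fixed offset and subtract it. Assumes the
# FPN is a constant per-row bias; real tools refine this considerably,
# e.g. by estimating offsets from dark frames instead of the image itself.

def remove_row_fpn(img: np.ndarray) -> np.ndarray:
    row_means = img.mean(axis=1, keepdims=True)  # per-row average level
    offsets = row_means - img.mean()             # row bias vs global mean
    return img - offsets                         # subtract the row bias

# Synthetic example: a flat grey frame plus a per-row offset pattern.
rng = np.random.default_rng(0)
clean = np.full((8, 16), 100.0)
fpn = rng.normal(0.0, 2.0, size=(8, 1))
fpn -= fpn.mean()            # zero-mean row pattern for the demo
noisy = clean + fpn          # horizontal banding, like underexposed ML raw
restored = remove_row_fpn(noisy)
```

On a flat synthetic frame this recovers the original exactly; on real footage the row mean also contains the scene, which is exactly why FPN gets "detected" as part of the image.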

So VFX it is, using Mocha, if you really need to get rid of it.

And of course, always add noise (grain) back into the image to dither the details and, in some cases, hide the FPN.

Raw Video / Re: Raw video on 5DMK2
« on: January 11, 2021, 09:19:54 PM »
Hey everyone, I have been using my 5D2 with an external monitor, and I have found that the record times are slightly reduced.

I have Global draw set to turn off when recording and I don't have the sound module.

In other people's experience, are there any settings I can tweak to squeeze some more performance out of it?


You can lower the resolution slightly until you reach continuous (or satisfactory) record times, but this will crop the image correspondingly, of course,

or lower the Bit Depth.

Scripting Corner / Re: MUlti Shot Image Capture script
« on: January 11, 2021, 09:16:27 PM »
Focus jiggling is an unproven way to do super resolution, based on the assumption that moving the lens focus between images may create small image-to-image movements at the sensor pixel level, e.g. a few microns. As I say, this remains unproven. However, if you can't carry out "normal" handheld super-resolution bracketing, e.g. because of shutter speed, then focus jiggling may introduce enough image-to-image jitter to allow you to carry out your super-resolution processing. If it doesn't, at least you will be able to reduce the noise in your image, i.e. by sqrt(n), where n is the number of images taken.
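The sqrt(n) noise-reduction claim quoted above is easy to verify numerically. This is a quick sketch with synthetic Gaussian noise, not real sensor data (real sensor noise also contains fixed-pattern components that averaging does not remove):

```python
import numpy as np

# Numerical check of the sqrt(n) claim: averaging n frames of independent
# Gaussian noise reduces the noise standard deviation by roughly sqrt(n).

rng = np.random.default_rng(42)
n_frames, sigma = 16, 5.0
frames = rng.normal(0.0, sigma, size=(n_frames, 100_000))

single_std = frames[0].std()             # noise of one frame, ~5.0
stacked_std = frames.mean(axis=0).std()  # noise after averaging all frames

ratio = single_std / stacked_std         # close to sqrt(16) = 4
```

So stacking 16 frames buys you about two stops of noise, which is the fallback benefit even if the jiggling produces no usable sub-pixel shift.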

I think this sounds very interesting, but I can't get my head around this "jiggling"; to me, it sounds like a guy gently kicking his tripod. Not sure if I am understanding it right, but do you shake the camera a little, as in artificially creating a pixel-shift image?

Can you link me to an article explaining this practice?

Academic Corner / Re: Magic Lantern usage in academia
« on: January 08, 2021, 02:12:24 PM »
Also military: I spoke to a recon soldier who used a 5D3 with ML for his reconnaissance work.

General Chat / Re: Raspberry Pi High Quality Camera
« on: January 08, 2021, 02:10:32 PM »
Thanks for clarifying.

I borrowed a Pi once from a friend, but it was only used for watching films on the TV.
