Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - bpv5P

Pages: 1 [2] 3 4 ... 7
26
Share Your Videos / Re: 60D 60fps RAW Test Video
« on: October 28, 2017, 01:51:35 PM »
Quote
I have a Canon 60d so limited by the buffer, I was able to get 1728x736 but 720p raw is far superior to h.264 1080p?

Well, you should test it... 1728x736 raw, interpolated to 1080p, will have better overall quality than a highly compressed native h.264 recording. If you're worried about storage space, you can process the raw, save it as a lossy compressed file such as HEVC at 40Mb/s (Log) and then delete the original raw file. It will still be better than h.264 in terms of SNR, dynamic range, color accuracy, etc.
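
For reference, a minimal sketch of that "compress, then delete the raw" step, assuming you've already rendered the raw to an intermediate file (intermediate.mov is a placeholder name) and that ffmpeg with libx265 is installed:

Code:
import subprocess

# encode the rendered intermediate to ~40Mb/s HEVC; a 10-bit pixel
# format keeps more of the log curve than 8-bit would
subprocess.run([
    "ffmpeg", "-i", "intermediate.mov",
    "-c:v", "libx265",
    "-b:v", "40M",
    "-pix_fmt", "yuv420p10le",
    "output_hevc.mov",
], check=True)

Only delete the original raw after checking the result, obviously.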

27
Camera-specific Development / Re: Canon 600D / T3i
« on: October 28, 2017, 01:01:28 PM »
Quote
Install instructions in download page are up to date.

Is there any changelog for the ML pages? Is this the page? Also, couldn't the admin self-host the original Canon firmware, instead of pointing to "pel.hu"? I don't find it very safe to point to an unknown website. The owner could just replace the zip with an exploit and *uck everyone. If that can't be done for legal reasons, at least mirror it on another server and post the SHA256 hash...


edit to add: it's running an old Apache with bad SSL, btw.
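
To be clear about what I mean by posting the hash: a minimal sketch of the check a user could then run, assuming Python is available (the expected value below is a placeholder, not a real hash):

Code:
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # hash the file in chunks so large zips don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

published = "placeholder_hash_from_the_download_page"
print("OK" if sha256_of("600d_firmware.zip") == published else "MISMATCH")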

28
General Chat / Re: First paid shoot, HELP!
« on: October 27, 2017, 07:29:30 AM »
Quote
What should be fixed and why it should be disabled?!

That's a communication problem. I thought fps_override was not working at all, but that's not true. In your first comment you were referring to overriding 24fps to 60fps, which obviously would not work.
And yes, indeed, it's very useful for timelapse and stuff like that.
Anyway, sorry for the noise.

29
General Chat / Re: First paid shoot, HELP!
« on: October 27, 2017, 05:38:34 AM »
Quote
That usually won't work. If you need 60 FPS, you should select it from Canon menu and leave FPS override off.

Yes, I should've said that too.
But wait, why isn't fps_override working? Can it be fixed? Or even disabled, if it's not working...
I used to use it to enforce exact 24fps (instead of the 23.976 NTSC standard), but now it doesn't make sense anymore.

30
Share Your Videos / Re: 60D 60fps RAW Test Video
« on: October 27, 2017, 02:30:53 AM »
Hi Rob Curd, sorry for not answering you... for some reason your reply didn't show up on my account (or I just forgot to answer).
Anyway:
Quote
I feel like shooting two whole stops over would leave me with a near white image in the sun ha.

Yes, but only if you "clip" your highlights. You should use raw histograms and check that they're not clipped (pushed off the end of the histogram). If they're not, you can recover them in post-processing using software...
ETTR is great if you know how to use it; it will generate images with much less noise and higher dynamic range.
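
If you want to sanity-check a frame outside the camera, here's a minimal sketch, assuming a DNG extracted from the MLV and the rawpy Python library (pip install rawpy); the filename is a placeholder:

Code:
import numpy as np
import rawpy

with rawpy.imread("frame_000000.dng") as raw:
    data = raw.raw_image_visible   # the Bayer data, before demosaicing
    white = raw.white_level        # sensor saturation point from metadata
    clipped = np.mean(data >= white) * 100.0
    print(f"{clipped:.3f}% of photosites are clipped")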

Quote
All very time consuming! Ha

Yes, your post-processing workflow is not ideal. I would suggest you use MLVProducer; it's very simple to use.

Quote
I need to change my sequence setting to 2.35:1 but just export to 16:9?

Exactly, that's the right way to do that...

Quote
is it a setting with mlv raw or another module I need to load?

It's like a "other version" of Magic Lantern. You can download here:
http://builds.magiclantern.fm/experiments.html

Quote
Thank you for being so generous with your time

No problem, you're welcome.

31
General Chat / Re: First paid shoot, HELP!
« on: October 27, 2017, 02:18:28 AM »
The scene is interpolated (artifacts at 12s). It has a "plastic" look, probably because of strong noise reduction. The lens is probably around 85mm and the aperture around f/1.8. The color grading is terrible but, well, it's the usual high-contrast teal-and-orange stuff.

What I would do:
- First choose whether you'll record in raw or normal h.264. If your camera and card support high-fps raw recording, I would say go for it, since the quality will be much better
- Turn on fps_override. Set it to high_fps mode at 60fps
- The shutter speed should be at least 45 degrees (1/480), so there's less motion blur and a better result after interpolation (see the sketch after this list)
- Open your lens to its maximum aperture (f/2.0 will do the job, although f/1.4 would be great)
- Adjust ISO according to the scene (you can get more light from an open window in the room; no need for artificial lighting). I would also suggest you expose to the right (let the scene be brighter than normal, then bring it down in post-production)
- Adjust white balance
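
A quick sanity check on those shutter numbers (exposure time is just shutter angle / 360 / fps):

Code:
def exposure_time(angle_deg, fps):
    # shutter angle to exposure time in seconds
    return (angle_deg / 360.0) / fps

print(1 / exposure_time(45, 60))   # 480.0 -> 1/480s, as above
print(1 / exposure_time(180, 24))  # 48.0  -> the usual 1/48s at 24fps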

Then, after you record, go to post-processing:
- If you recorded in raw, convert to CinemaDNG and load it into software that supports raw decoding (After Effects, Resolve, etc)
- After that, use denoising software to get that "plastic look". NeatVideo works OK
- Interpolate the frames using Twixtor or some other frame-interpolation software
- Save to a lossless or near-lossless intermediate format (ProRes, DNxHD, Cineform, Lagarith, etc)
- Put it in Premiere Pro and do the color grading using LUTs in Lumetri (the Ektar 100 LUT from hyalinejim seems to work well for raw images, although I haven't tested it myself; ImpulZ always works for me on h.264 footage)
- Export as h.264 mp4 at a bitrate of about 10Mb/s CBR (see the sketch below)
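
For the export step, a minimal sketch assuming ffmpeg is installed (graded.mov is a placeholder name; ffmpeg approximates CBR by pinning minrate/maxrate):

Code:
import subprocess

subprocess.run([
    "ffmpeg", "-i", "graded.mov",
    "-c:v", "libx264",
    "-b:v", "10M", "-minrate", "10M", "-maxrate", "10M",  # ~10Mb/s CBR
    "-bufsize", "2M",
    "-pix_fmt", "yuv420p",
    "delivery.mp4",
], check=True)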

32
Post-processing Workflow / Re: Encoding for UHDTV and ACES workflow [?]
« on: October 26, 2017, 03:26:59 PM »
Quote
Here's the biggest data set I have found online: http://www.gujinwei.org/research/camspec/db.html (click on the database link). Remember, this is someone's research so you would require permissions from the author and The University of Tokyo to publish anything that uses the data. It is also incomplete for the purposes of rawtoaces and would require extrapolating between 380-400nm and 720-780nm i.e. not ideal when you are dealing with such precision.

There's some good data right there. Thanks for sharing.
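
On the extrapolation problem: a minimal sketch of the crudest option (holding the edge values constant), assuming the curves live on a 400-720nm grid; the arrays here are placeholders, not real sensitivity data:

Code:
import numpy as np

wl_known = np.arange(400, 721, 10)          # wavelengths the dataset covers
sens_known = np.random.rand(wl_known.size)  # placeholder sensitivity curve
wl_full = np.arange(380, 781, 10)           # the 380-780nm grid rawtoaces wants

# np.interp clamps to the edge values outside the known range;
# a linear fit per edge would be a less crude alternative
sens_full = np.interp(wl_full, wl_known, sens_known)

As the quote says, none of this is ideal when you need real precision.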

Quote
I don't know. Ask them

Actually, it can be that simple. I know some projects that just asked the company and got the information. An example is the Debian and OpenBSD wifi drivers: Ralink just gave them the data.
I don't think this will happen with Canon, since we know the kind of business they run, but we could try.

Quote
Ideally this should be done individually for each camera

Yes. I think the manufacturing process itself should allow some kind of "checksum" of each sensor, automatically generating a metadata file available for download by the camera's serial number :P

Quote
for reasons of sanity :), tend to stick to currently supported standards

haha, good point.

33
General Help Q&A / Re: Multicamera shot, the risks for the production?
« on: October 26, 2017, 11:53:33 AM »
I'm not @reddeercity, but more reliable power sources will have no effect on ML stability. The CompactFlash card will, though. I would suggest you carry backup CF cards (with the same ML files on them), so if one of them throws an error or fills up, you can just swap in another and you're good to go.
Since it's not very stable, redundancy is necessary.

34
Share Your Videos / Re: FRIENDLY MANITOBA Photoshoot
« on: October 26, 2017, 11:46:13 AM »
Walter, Vimeo has been using HTML5 for some time now... are you sure your browser is up to date? :)

I think the people having this issue are accessing the ML forum behind a firewall that blocks youtube/vimeo, or with a browser using some kind of iframe or script blocker.
If someone reproduces this error with a fresh install of Firefox on a non-firewalled network, then we have an issue. It doesn't seem to be a problem with the forum software (SMF), though.

35
Post-processing Workflow / Re: Encoding for UHDTV and ACES workflow [?]
« on: October 26, 2017, 11:34:38 AM »
Sorry for the late reply, Andy600:

Quote
Spectrally derived color rendering is much more accurate but essentially rawtoaces is currently doing what DCRaw, Resolve, After Effects etc can do but with the addition of using QE sensor data if available.

You said you're trying to add QE data for Canon models, but using "found research", since to really construct it you would "need access to a monochromator in laboratory conditions", right?
May I ask how you're finding this research? From what I know, some Canon sensors are from Sony (not all of them; maybe the Axiom guys know better), so maybe there are specifications on some website, but I think variations could be introduced even between sensors from the same batch in the same factory.
There are also other issues; for example, the demosaicing algorithm adds color artifacts (by the way, what I've read about the Foveon X3 makes me hopeful that the industry will adopt something like it in the future). And we're not even considering the physical factors, such as the distance people watch these images from, the ambient luminance, etc.

Anyway, if you figure out the QE data for some Canon cameras, will you release it as open source in the future?

Quote
There are new ODTs in the works covering smartphones, iPads, new TV technologies (Dolby Pulsar etc) and this will only make life easier.

Yes, that would be amazing.


To refine my other comment above:
Quote
I'm still waiting for the big players to look into Daala+Opus, though.

I did some research and they are actually looking into it. It's called AOMedia Video 1, or AV1. It already outperforms HEVC and many others. Here is a comparison:
http://wyohknott.github.io/image-formats-comparison/lossy_results.html

And here, using SSIMULACRA and google's butteraugli:
https://encode.ru/threads/2814-Psychovisual-analysis-on-modern-lossy-image-codecs?p=54616&viewfull=1#post54616

The project called Pik is trying to go even further, although in my opinion the techniques applied in AV1 seem much more interesting than building a compression algorithm purely as convergence toward a constructed psychovisual metric (butteraugli).
FLIF seems like a good algorithm for lossless compression. It could be used for a future digital intermediate format, better than the (now open-sourced) Cineform or ProRes (I don't know if its decoding speed is high enough for this, but it would be interesting to try)...

36
Share Your Videos / Re: FRIENDLY MANITOBA Photoshoot
« on: October 26, 2017, 10:39:34 AM »
Quote
Can you post a link, please?  I don't see your video.  Thanks.

Why are so many people having this problem lately? Are you guys sure your browsers aren't blocking iframes? Someone should open a thread specifically about this...

37
General Chat / Cineform goes Open Source!
« on: October 26, 2017, 07:50:06 AM »
Cineform goes Open Source!
The official blog post and the software development kit.

Quote
One reason for keeping it closed, you might be surprised to hear, is that the codec's core tech was very simple. The codec idea was sketched out on a single piece of paper and the performance was determined first by counting the number of Intel instructions needed using an Excel spreadsheet -- even that fit on a single page.

38
DNG is already a raw format. Use it if you can. DNG has better lossless compression and is an open format.
CR2 is a proprietary, closed format from Canon. I don't even know if there's a tool to convert DNG to it...
Use mlv_dump or any other tool listed on the forum (see the sketch below).
If you really need to convert raw data to CR2 for some reason, I can't help, sorry.
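
A minimal sketch of batch-converting MLVs to DNG with mlv_dump, assuming the binary is on your PATH; --dng and -o are the options I know from mlv_dump's help text, but check your build:

Code:
import pathlib
import subprocess

for mlv in sorted(pathlib.Path(".").glob("*.MLV")):
    # write the DNG frames using the clip name as a prefix
    subprocess.run(["mlv_dump", "--dng", "-o", mlv.stem + "_", str(mlv)],
                   check=True)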

39
Share Your Videos / Re: FRIENDLY MANITOBA Photoshoot
« on: October 25, 2017, 10:57:07 PM »
Very cool!

40
General Help Q&A / Re: Multicamera shot, the risks for the production?
« on: October 25, 2017, 10:06:40 PM »
Not very stable, but if you have another 5D3 as backup, I personally would go for it. The money you would otherwise spend on a high-end camera can go to other things (such as an anamorphic lens, a neutral density filter, a polarizer, etc.).

41
How are you doing the scan, hyalinejim? I did some research some years ago (while working with analog photography); the best process I found was:
- Shoot a photo of a color chart
- Get VueScan
- Adjust WB based on 18% grey:
http://www.hamrick.com/vuescan/html/vuesc12.htm#topic6
- Set VueScan to its highest settings (resolution and DPI)
- Scan at normal exposure, saving to DNG:
http://www.hamrick.com/vuescan/html/vuesc15.htm#topic9
- For better quality, you can use "multi-scanning" (although I personally would do it manually, using different scan exposures and then blending with HDRMerge):
http://www.hamrick.com/vuescan/html/vuesc23.htm#topic17
https://jcelaya.github.io/hdrmerge/
- Anti-newton glass and wet mounting can help with scanner focus:
http://www.betterscanning.com/scanning/usinginsert35.html
http://www.betterscanning.com/scanning/msfluid.html

That way you get a pretty raw scan. There are also other variables that can change the film's colors, such as the development process (C-41?), push/pull processing, and whether the film is expired or not...

42
General Help Q&A / Re: Vertical lines on .mov video (not raw) using 1100D
« on: October 21, 2017, 09:08:20 PM »
Have you tried shooting without Magic Lantern? If you did and the error persists, ML is not causing this problem.
Check that you're recording progressive frames (for example, 720p, not 720i). Check the bitrate settings in ML (if you haven't changed them, the value should be 1.0x CBR).
It looks like a debayering issue; are you sure it's not a RAW recording?
If nothing works, remove Magic Lantern from the card, format the card, update the original Canon firmware, go to menu > reset settings, and reboot the camera. After that, the camera should be as it was before you installed ML...

43
Great, hyalinejim. I've yet to try these with ML-Log 1.3.
I think it would be a good idea to keep the opening post updated with the latest version, so people who land here from Google in the future don't need to read the whole thread just to get the LUTs...

44
Raw Video Postprocessing / Re: ML-Log: new log profile for Magic Lantern
« on: October 21, 2017, 08:03:35 AM »
Ok, but then it's not code, and the GPL is not the proper license to use. Maybe CC0 would be a better fit...

45
General Development / Another idea
« on: October 21, 2017, 03:56:53 AM »
Another crazy idea (from my series of idiot ideas).
Before suggesting it: I'm sure it's not simple to implement and I'm not asking anyone to do it; it's just a hypothesis (again).

A "cinema mode" script. It could adapt for common configurations, such as 180 degree shutter speed, 24fps and so on, but also adapt the MLV module to the best possible resolution. For example:

1. The script turns on the modules (MLV, AETTR, Dual_ISO, adtg iso)
2. Draws (using the GUI hacks) a message such as "Reboot the camera and don't turn it off after reboot"
3. After reboot, runs an automatic card benchmark
4. Using the benchmark data, adapts the MLV module to the highest resolution that can record in realtime, including bitdepth reduction if necessary (see the sketch at the end of this post)
5. Sets the shutter speed to 180 degrees
6. Sets the frame rate to exactly 24.00
7. Asks the user if they want dual_iso (if yes, takes it into account in step 8)
8. Reading the scene with the AETTR module, adjusts ISO for the best possible SNR (following the adtg discussion)
9. Asks for a manual white balance adjustment (using a reference picture or just a kelvin temperature)
10. Writes this configuration to ML, for fast switching.

As a1ex said about the other ridiculous idea I wrote here, the configuration cannot be changed easily, but hey, it's just an idea :)
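
Just to make step 4 concrete, a minimal sketch of the selection logic, assuming the benchmark gives us a sustained write speed; the resolution list and numbers are made-up examples:

Code:
def data_rate_mb_s(width, height, bits, fps=24.0):
    # raw data rate in MB/s for one second of footage
    return width * height * bits * fps / 8 / 1024 / 1024

def best_mode(card_mb_s, resolutions, bitdepths=(14, 12, 10)):
    # resolutions assumed sorted largest-first; prefer resolution over bitdepth
    for w, h in resolutions:
        for bits in bitdepths:
            if data_rate_mb_s(w, h, bits) <= card_mb_s:
                return (w, h, bits)
    return None

modes = [(1920, 1080), (1728, 972), (1600, 900), (1280, 720)]
print(best_mode(40.0, modes))  # e.g. a card that benchmarks at ~40MB/s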

46
Post-processing Workflow / Re: Encoding for UHDTV and ACES workflow [?]
« on: October 21, 2017, 02:57:36 AM »
Quote
It depends. You can certainly play with it and it's not limited to single frames. See: https://github.com/miaoqi/rawtoaces/tree/feature/DNG#usage

Ok, but the document doesn't specify the output format. Is it a debayered OpenEXR?
I don't have a Debian system here to compile it, but I could do it and create a binary for everyone, so we here on the ML forum could play with it...
Also, if it is an OpenEXR, when I load it in some software (Resolve, for example), will the working color space automatically be ACES (is it in the EXR metadata), or do we have to configure it? If not, what's the purpose of this conversion (if not the better accuracy of "camera spectral sensitivities and illuminant spectral power distributions")?

Quote
For archiving yes but not for commercial deliverables. HEVC, ProRes HQ or DNxHD/HR maybe but not XQ.

Thanks. HEVC seems a good option for streaming, and the others better for non-streaming.
I'm still waiting for the big players to look into Daala+Opus, though.

Quote
Correct. They are mostly built for or are relative to Rec709 display.

But that's the point. If the final product will be displayed in a Rec.2020 space, but the LUT is already limited to Rec.709, using that LUT is a waste, since the final product will not use the full capability of Rec.2020, right?

Quote
I wouldn't say it's a waste as such but you have to ask yourself why not stick with YRGB workflows if you want to use lots of luts? If you are making deliverables for several outputs then ACES is a useful color management system and luts, although limiting, can still be used.

Yes, but I'm talking hypothetically here. I personally don't need all this quality, because most of our work gets uploaded by clients to streaming websites and the final users watch on cheap smartphones, but I try to check from time to time what the best workflow is today (considering what's relevant to the workflow, not the 32-bit 1% red shit we discussed in other threads).

Quote
The statement about not baking luts into ACES means simply not baking the look into the EXR files. The look can be baked into the final output.

I understand, but that's the view of an on-set production that needs a realtime preview of something "close to final", right? My questions are about using LMTs as a tool in the final color grading process, not as a realtime look...


ps. Sorry if some of you don't understand the things I write; English is not my mother tongue...

47
Raw Video Postprocessing / Re: ML-Log: new log profile for Magic Lantern
« on: October 20, 2017, 11:46:26 PM »
Hey hyalinejim and Danne, what do you need to port it to the 50D and 600D? I have both cameras, so maybe I could help with some data. I don't have a color checker, though. I see you're already working on a 100D release.
Also, the GPL only applies if the code is open, but I can't find the code. Wouldn't it be better to use GitHub instead of Bitbucket, btw?

48
Share Your Videos / Re: 50D RAW with cheap yongnuo lenses
« on: October 20, 2017, 10:12:18 PM »
Oh man, very cute; there's an "Into the Wild" atmosphere to this. There are some white balance issues in the first scenes, but overall it's great.
Never heard of Yongnuo, though. I use an old Takumar on the 50D; it gets good results with MLV too...

49
Post-processing Workflow / Re: Encoding for UHDTV and ACES workflow [?]
« on: October 20, 2017, 09:39:57 PM »
Also, Andy600, the rawtoaces tool seems to work only on DNG (besides CR2 and NEF), so it would need to be automated to work on a folder of DNGs (potentially already produced with raw2dng), right? Something like the sketch below is what I have in mind. How would someone produce a sequence from these separate EXRs? I don't know how these work; that's why I'm asking these questions...
Thanks for all the information, btw.
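
A minimal sketch of that automation, assuming the rawtoaces binary is on your PATH and accepts a file path as its argument (check rawtoaces --help for the real options; the folder name is a placeholder):

Code:
import pathlib
import subprocess

# run rawtoaces on every DNG of the sequence, in frame order
for dng in sorted(pathlib.Path("dng_sequence").glob("*.dng")):
    subprocess.run(["rawtoaces", str(dng)], check=True)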

50
Post-processing Workflow / Re: Encoding for UHDTV and ACES workflow [?]
« on: October 20, 2017, 09:31:09 PM »
Thanks to everyone contributing; great information here.
So, to compile the information (according to Andy600):
- IDT is not necessary for raw data
- rawtoaces is not ready for use yet
Also:
- ProRes 4444 XQ is a good codec for Rec.2020 (although I think DNxHD is also a good option)

Quote
There is nothing wrong with properly built luts and for film emulation there is currently no other way to encapsulate color crosstalk of film without sampling it and building a lut from the data.

Ok, but the current 3D LUTs in widespread use are not constructed to be ACES-to-ACES, right? So using them together with ACES would be a waste, since the LUTs would be limited to another space(?). Someone from the Academy explains this here (read the part on "matching LUT X"). But, as he states:
Quote
"Because empirical LMTs are derived from output-referred data, the range of output values from such an LMT is limited to the dynamic range and color gamut of the transform used to create the empirical LMT"
And:
Quote
Furthermore, empirical LMTs should certainly not be “baked in” to ACES data because that would destroy potentially useful dynamic range and color information contained in the original ACES-encoded imagery.

So, he continues, the right step when using ACES is not to apply an "empirical LUT X" (a normal LUT converted to ACES), but to create an "analytic LMT" that is already built for the ACES range.



I have no know-how to do this, but one could probably make a lot of money by converting these Vision3 emulation LUTs into real analytic LMTs. Just a tip.  ;D

Also, note:

Quote
CLF does not support math formulas, so CTL used for even simple shaper functions would need to be sampled to LUTs for implementation in CLF. This is potentially limiting, but extending CLF, or adding support for algorithmic description of LMTs, is under consideration for upcoming ACES enhancements and extensions.
And Nick Shaw (ACES Mentor) agrees:
Quote
This is a very important point. Since in CLF shaper functions currently need to be implemented as 1D LUTs, and operations like hue modifiers as 3D LUTs, that does not fit the goal of making analytic LMTs which are not limited to a particular range

So, the correct approach would be to construct an LMT from scratch, using math formulas inside it, as an "emulation" of something like Vision3. That way it would not be limited like the "empirical LUT X". A toy sketch of what "analytic" means here is below.
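
A minimal sketch, assuming numpy; the contrast curve is an arbitrary example, nothing close to a Vision3 emulation:

Code:
import numpy as np

def analytic_lmt(aces_rgb, contrast=1.1, pivot=0.18):
    # a power function pivoting around 18% grey; because it's a formula,
    # values above 1.0 pass through unclamped, unlike an empirical LUT
    rgb = np.asarray(aces_rgb, dtype=np.float64)
    return pivot * np.power(np.maximum(rgb, 0.0) / pivot, contrast)

print(analytic_lmt([0.18, 1.0, 12.0]))  # highlights above 1.0 survive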
