Recent Posts

Pages: 1 2 [3] 4 5 ... 10
21
See https://www.magiclantern.fm/forum/index.php?topic=25440.msg232207;topicseen#msg232207 Gamma shift issues on Mac / NCLC tags

Is it possible for correct colour metadata tags (NCLC) to be added to ProRes output from MLVApp? The tags are currently missing from ProRes exports.

Untagged files are at the mercy of apps and the OS, which may display them without colour management.
22
@Abdul - no.
23
OpenColorIO is very far away from how we do our processing, so we found no easy way to add it. But feel free to create a pull request if you come up with a good solution.
LUTs in MLV are processed near the end of the pipeline:
https://github.com/ilia3101/MLV-App/wiki#processing-a-frame
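To illustrate what "near the end of the pipeline" means in practice, here is a minimal, hypothetical sketch of a 3D LUT lookup applied to an already-processed RGB pixel. This is not MLVApp's actual code (see the wiki link above for the real pipeline); it only shows the basic idea, using nearest-neighbour sampling for brevity where a real implementation would interpolate.

```python
# Hypothetical sketch of a late-pipeline 3D LUT lookup (nearest-neighbour).
# MLVApp's real implementation differs in detail (it interpolates).

def apply_lut_3d(pixel, lut, size):
    """pixel: (r, g, b) floats in 0..1; lut: flat list of (r, g, b) tuples
    with the red channel varying fastest; size: edge length of the cube."""
    r, g, b = (min(max(c, 0.0), 1.0) for c in pixel)
    # Map each channel to the nearest LUT grid index
    ri = round(r * (size - 1))
    gi = round(g * (size - 1))
    bi = round(b * (size - 1))
    return lut[ri + gi * size + bi * size * size]

# Identity 2x2x2 LUT: output equals the grid point of the input
size = 2
identity = [(r, g, b) for b in (0.0, 1.0) for g in (0.0, 1.0) for r in (0.0, 1.0)]
print(apply_lut_3d((1.0, 0.0, 1.0), identity, size))  # -> (1.0, 0.0, 1.0)
```

Because the LUT sits after debayering, white balance, and tone mapping, it only ever remaps the final display-referred values, which is why it cannot recover anything the earlier stages have already clipped.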
24
So exciting that this topic has been resurrected from the grave.

So if I understand correctly, any SATA-to-IDE converter that provides UDMA 7 and 150 MB/s read/write speeds while acting as a device, not a host, should work with the right 'dumb' pin conversion adapter?

So what about an adapter like this:
https://www.amazon.com.au/Ableconn-IIDE-MSAT-2-5-Inch-Converter-Aluminum/dp/B017VQT5YW/ref=pd_lpo_23_t_2/358-1747184-0009040?_encoding=UTF8&pd_rd_i=B017VQT5YW&pd_rd_r=c92e0606-6922-40bf-8691-b3989882b6db&pd_rd_w=eTcEi&pd_rd_wg=wEvUL&pf_rd_p=ad2d1e6e-bc60-4795-b4c0-2dbd35f6678d&pf_rd_r=Z3ZX2WGHKTW3VHYPWNJ4&psc=1&refRID=Z3ZX2WGHKTW3VHYPWNJ4

Or even this:
https://www.aliexpress.com/i/32948126404.html

If you're having trouble getting power into the drive while it is being initialised by the camera, you could use this adapter:
https://www.amazon.com.au/Bi-Directional-Adapter-JM20330-Chipset-Dongle/dp/B07M8S8MCM

And here's a ready-to-go fake CF card that takes CFast cards:
https://www.amazon.com/UREACH-CFast-CF-adapter-2-PACK/dp/B016PA1I3G

25
General Chat / Re: Looks interesting
« Last post by garry23 on Yesterday at 07:53:05 AM »
One thing to think about is the resolution you can get using the two approaches.

That is, driving the lens from inside (i.e. using the ML or Canon code) vs. rotating the lens externally via an add-on device.

Just a thought.
26
Reverse Engineering / Re: Reverse Engineering Picture Styles
« Last post by aceflibble on Yesterday at 12:05:18 AM »
Still, I don't really get the 10-bit reference, or source, since all that was done creating the pic style was done with the 8-point (or 10, can't remember) curve tool in the pic style editor. All in eight bit. Unfortunately no "black-box magic" in the pic style.
But that's the thing: we all assumed that for years, but it turns out it wasn't created that way. They could not possibly have put in these values in PSE; it's literally not possible for that software to do it. So either Canon clued them in to NakedPSE all those years back (unlikely, given it has apparently only ever existed in Japanese, and if it was being offered around you'd think we'd have heard of it sooner), or Technicolor had some other method, not using any version of PSE, to set those values. So our previous assumption that they just knocked something together in PSE is wrong; they had something else going on back then, and I'd love to track down one of those developers and find out what they used.
However they set the values, it still leaves me wondering why they used the 10-bit values when they could have created effectively the same curve shape using the 8-bit values every other profile uses. And it doesn't explain at all how the Canon systems can recognise and act on those 10-bit values when they're supposed to be working with 8-bit values and only applying them to 8-bit files. The only thing I can think of is the slim possibility that the cameras actually do work in at least 10-bit (if not higher) before the gamma curve is applied. This tallies with ML being able to get 10-bit video out of older cameras (albeit usually with a crop or strict time limit), but the assumption there has always been that that was some kind of extra-tough hacking wizardry ML was pulling off which Canon had no clue about... yet these profiles, and the cameras supporting 10-bit values, suggest Canon was working towards this themselves but for some reason never completed the functionality. I just can't think of any other reason why Canon would make sure their system could work with 10-bit values in a post-capture processing ruleset unless they actually thought they might include some kind of 10-bit (or higher) recording. (Outside of raw photos, which of course aren't affected by these curves.)

As I said earlier, the whole thing is mind-bogglingly strange.
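One way to see why 10-bit curve points stand out is that only a quarter of the 10-bit code values are even reachable by scaling up an 8-bit point. This toy check assumes the common v10 = round(v8 * 1023 / 255) mapping, which is an illustration on my part, not anything confirmed from the picture style files themselves:

```python
# Illustrative only: which 10-bit curve values can be produced by scaling
# an 8-bit point? Assumes the common v10 = round(v8 * 1023 / 255) mapping.

reachable = {round(v8 * 1023 / 255) for v8 in range(256)}

print(len(reachable))    # 256 of the 1024 possible 10-bit codes
print(514 in reachable)  # True: 8-bit 128 scales to 514
print(517 in reachable)  # False: no 8-bit point lands here
```

So if a profile contains values like 517 that no 8-bit point maps to, it genuinely could not have been authored on an 8-bit grid, which is the crux of the argument above.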

I really like the Cinetech picture style from Visioncolor for my shooting style when shooting 8-bit H.264. More dynamic range compared to Neutral (Prolost settings). Maybe a little more noise, but you can clean that up in post.
I'm afraid, having pulled apart the files, there isn't anything which can capture more dynamic range than the stock Neutral profile already does with its true bare-bones matrix. What you're seeing in those boosted styles is not more dynamic range but simply a reallocation of the range, typically brightening shadows (hence why you're seeing more noise) at the expense of cramming the mid tones closer together. There's actually a good example of this type of 'expansion' much earlier in the thread. On page 5 you'll find Danne trying to work out a more linear DCP profile and thinking they may have seen more dynamic range (but, to their credit, acknowledging they may have been imagining it), and dfort following up a few replies further down, pointing out that after careful inspection there wasn't more range; it's just that the peak brightness was a little lower, so it made the same detail look slightly better defined. What you're getting with your noisier Cinetech profile is the same thing at the opposite end of the scale, as I said before: the same range, just gaining definition in one area at the cost of definition in another.

Canon's own plain ol' Neutral is the matrix that renders the widest range and the most detail. Not necessarily the most clearly defined detail and range, but the most in a total sense. You simply cannot get purer than that 1/1/1/0/0 matrix.
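The "reallocation, not expansion" point can be demonstrated with a toy shadow-lifting gamma curve (my own illustration, not any actual picture style's curve): the output still spans exactly the same 0..255 range, the shadows just get more of the available codes while the highlights get fewer.

```python
# Toy illustration: a shadow-lifting gamma curve redistributes 8-bit codes
# without extending the captured range.

def lift_shadows(v, gamma=0.6):
    return round(((v / 255) ** gamma) * 255)

curve = [lift_shadows(v) for v in range(256)]
print(curve[0], curve[255])      # 0 255 -- total range is unchanged
print(curve[16])                 # shadows are stretched across more codes...
print(curve[240] - curve[224])   # ...while highlights are compressed (< 16)
```

The brightened shadows are exactly where the sensor noise lives, which is why a "flat" profile looks noisier even though it recorded the same scene range.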

But, as I said in my much longer comment above, if what you're using is working for you and getting you the results you like, then don't worry about whether some other method is or isn't technically better, or easier, or more popular, or whatever. If the final image is how you wanted it to be and you like the workflow, just keep doing what you're doing. 8)

Before anyone suggests it, I've already tried using 0.9 and 0.5 matrices and then boosting saturation back up later, and they don't improve anything. Dropping the initial strength of each channel does improve clarity in mid tones but takes it away from shadows and highlights, so again it's just a trade-off. There's no actual improvement on the simplest matrix, unsurprisingly.
If you want to shoot in black & white then I have found a totally even 0.3/0.59/0.11/0/0 matrix does seem to give more mid tone nuance at no cost to shadows or highlights compared to the stock monochrome profile, but personally I like my black & whites to be a bit more punchy than true-to-life.
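For anyone wondering where that 0.3/0.59/0.11 split comes from: those are the (rounded) Rec.601 luma coefficients, i.e. the classic perceptually weighted grey conversion, which is why it reads as "totally even" to the eye. A quick sketch:

```python
# The 0.3/0.59/0.11 weights are the rounded Rec.601 luma coefficients:
# green contributes most to perceived brightness, blue the least.

def to_grey(r, g, b):
    return round(0.3 * r + 0.59 * g + 0.11 * b)

print(to_grey(255, 255, 255))  # 255: the weights sum to 1, white stays white
print(to_grey(0, 255, 0))      # 150: pure green reads as a bright grey
print(to_grey(0, 0, 255))      # 28:  pure blue reads as a dark grey
```

Because the weights sum to 1.0, the conversion can't clip or crush anything on its own, which matches the observation that it costs nothing in shadows or highlights.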
27
Okay, I get the dependencies, and I get why this is difficult to implement, so I won't bother any of you with that again. But since I'm trying to get it working and to produce some 3D LUTs as ODTs (for example, from AP0 or AP1 to Rec.709, sRGB or DCI-P3), I want to ask whether I should post these LUTs here, in case they're of use to other people who don't want to rely on software other than MLVApp.
Oh, but where in the image processing pipeline does MLVApp apply LUTs?
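For sharing LUTs like those, the plain-text .cube format is the usual container. Here is a minimal sketch of writing one, assuming the common red-channel-fastest data ordering; the identity `transform` is a placeholder for a real AP0/AP1 to Rec.709 conversion, which is not shown here.

```python
# Sketch of writing a 3D LUT in the common .cube text format
# (red channel varies fastest). `transform` is a placeholder: swap in a
# real AP0/AP1 -> Rec.709 conversion; identity is used for illustration.

def write_cube(path, size=17, transform=lambda rgb: rgb):
    with open(path, "w") as f:
        f.write("LUT_3D_SIZE %d\n" % size)
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    x = transform((r / (size - 1), g / (size - 1), b / (size - 1)))
                    f.write("%.6f %.6f %.6f\n" % x)

write_cube("identity.cube", size=2)
print(open("identity.cube").read())
```

A colour-managed transform like an ODT generally needs a larger grid (33 or 65 points is typical) so the interpolation error through the curved parts of the transfer function stays small.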
28
General Chat / Re: Looks interesting
« Last post by theBilalFakhouri on October 27, 2020, 11:25:40 PM »
Some days ago I shared this video on the Discord server:

a1ex's replies were exciting:

Quote
"should be doable without external motors, and with a less expensive sensor
https://www.magiclantern.fm/forum/index.php?topic=23739.0
https://www.magiclantern.fm/forum/index.php?topic=4997.0
you still need a way to get the distance sensor data into ML (possible via USB for example, but there may be other ways)"

Original messages on Discord:
https://discord.com/channels/671072748985909258/756922555532837024/766227505056186368

LiDAR talking directly to the lens motors via a USB connection using ML; that might work with STM lenses (with a loop focus ring).
29
General Chat / Re: Looks interesting
« Last post by garry23 on October 27, 2020, 11:03:21 PM »
Missed that post  :) ;) :D ;D >:( :( :o 8) ??? ::) :P :-[