Messages - Luther

#26
Quote from: adrjork on June 02, 2020, 12:34:20 AM
My cam is a 5D3, and build should be Nightly.2017Feb12.5D3123
You're using a very old build. Try @Danne's latest build.
#27
Quote from: motionSOUL on May 29, 2020, 11:30:45 AM
Do someone has succeeded in using Cinelog-C from MLV App?
Cinelog-C is proprietary. MLVApp can't legally put their DCP in there...

Quote from: adrjork on May 30, 2020, 02:08:27 AM
Hi everyone, my dual-iso clips have lines and flickering also after the conversion (MLV to DNG)... ???
Which camera and build are you using? Also, if possible, provide an MLV sample...
#28
Quote from: adrjork on May 26, 2020, 03:05:49 AM
Unfortunately, I haven't enough free space to convert all the clips to DNGs.
See: https://www.magiclantern.fm/forum/index.php?topic=13152.0
Quote
once I'll know which are the clips I actually need, I'll convert them few MLV-to-DNG.
MLVApp is great for previewing. Only exporting is painfully slow sometimes.
Quote
Davinci needs a recent OSX, but Mojave can't work well with Nvidia, right?
As far as I know, that's not true. But you could use a Linux distro like Debian: it has Nvidia drivers, and Davinci works on it (faster than on Windows/OSX). Not surprising that Pixar uses Debian on their render farm...
Quote
So I went for a couple of Radeon VII GPUs
I like AMD too. I have an AMD CPU, and AMD is more cooperative with the open source community than Nvidia/Intel. But unfortunately CUDA is way ahead of OpenCL, and CUDA is Nvidia-specific. For heavy processing like you're doing, Nvidia is the only cost-effective solution.
Quote
How can I re-format my external storage without deleting my mlvs inside??? Is it possible???
This is called "in-place filesystem conversion". I don't think it is possible to do that from HFS+ to OpenZFS, though. You can read (but not write) HFS+ on Windows using this tool (it seems... never tested). On Linux you can both read and write HFS+ filesystems.
The best solution would be to get new HDDs (I highly suggest WesternDigital over Seagate), then copy everything over and erase the old HDDs. This way you ensure everything is in its right place, on a fresh filesystem. ZFS is great for large amounts of data. You could also consider RAID mirroring.
#29
Quote
How will that work with mlv files?
Quote
It seems that Transkoder doesn't handle MLV codec.
Same way people work with MLV in Resolve: transcoding to CDNG. Transkoder can take full advantage of the GPU using CUDA. It might be expensive, but I would seriously consider it if I needed to process multiple terabytes of data.
Quote
I formatted my mlv-storage with HFS+
Consider using ZFS instead. That's what most people processing large amounts of data use. Windows and OSX have support for it (I don't know how well those projects work, since ZFS was built primarily for linux/bsd).
Quote
I've sold my Nvidia cards
The older Nvidia architectures are very cheap nowadays. I bought my Nvidia 1050 for about $120. I think that's a fair price in an era where people pay $400 for Apple wheels...
#30
@adrjork you have a shitton of data. MLVApp is not really suited for that kind of thing (CPU-only processing). I'd say it's time for you to invest in something like Transkoder.
#31
Raw Video / Re: DragonFrame with 5d markII
May 25, 2020, 04:37:54 PM
No, only HDMI output has higher resolution. USB output is limited, AFAIK.
#32
Quote from: adrjork on May 24, 2020, 05:21:48 AM
Which profile do you recommend (to be graded after in Davinci)?
You're better off just converting to CDNG and processing directly on Resolve using ACES.
If you don't want to do that, I'd go with Alexa Log-C.
#33
Unfortunately the 600D is not very good at handling Raw footage (MLV), @alexmansur92, because it cannot write to the card fast enough. I've used a 600D for years, and it is still great to run ML on it. My advice:
- Use a different picture style. There are many out there; the ones that gave me the best results were VisionTech and this one.
- Use 24fps with a 180-degree shutter (shutter speed = double your fps, i.e. 1/48) if you want that "cinematic look" people talk about. ML has a feature called "FPS Override" to make sure you get exactly 24.000fps. You can also fine-tune your shutter speed to get 1/48 (see the quick arithmetic sketch after this list).
- Get an 18% grey card and set a custom white balance before you shoot in different places.
- Disable sharpness using the "zero sharpness" feature.
- I personally use these next two, but they might be placebo: set your ISO to -0.3 gain so you get multiples of 80 (160, 320, 640, etc.), which supposedly decreases noise, and increase bitrate using CBR factor 3.0x. Don't use ISO above 1250; it will just increase noise.
- Expose to the right, meaning overexpose the video a bit (I normally go +1 to +1.5 f-stops above). Activate Zebras and/or the histogram in the Global Draw menu.
- Get old lenses. I always recommend the Helios 44-2 or the Takumar 50mm f/1.4. I've used these tiny lenses for many years (almost a decade); they never failed, they are very cheap and they give great results. You can also put some filters on them for better results (circular polarizer, an ND filter to get 1/48 shutter speed in sunlight, Black Pro-Mist).
- For post-processing: NeatVideo does a good job at removing noise. Samurai (by Digital Anarchy) does a good job at increasing sharpness. For color grading you will have to learn by yourself, as everyone has a different approach. An easy way of getting "ok" results is using the ImpulZ 3D LUTs.
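
As a quick sanity check of the 180-degree rule above, here's the arithmetic as a tiny Python sketch (nothing ML-specific, just the shutter-angle formula):

Code:
def shutter_speed(fps, shutter_angle=180.0):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / fps

for fps in (23.976, 24.0, 25.0, 30.0):
    t = shutter_speed(fps)
    print(f"{fps:>6} fps @ 180 deg -> 1/{1 / t:.0f} s")
# 24 fps at 180 degrees gives 1/48 s, matching the "double your fps" rule of thumb.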

Don't be afraid of installing Magic Lantern. I've installed it literally more than 50 times over the last decade and the camera still works as if it were new.
If you need more tips, search the forum for topics of interest. There are lots of old threads with loads of information.
If you can buy another camera, get something that records MLV (Raw video). That will make a *very big* difference in your final results. The cheapest ones with good quality are the 50D and the EOS M.
#34
I just use Reinhard with the AP1 matrix, increase saturation and play with curves. This is the fastest way of processing.
I used to use Log-C, but skin midtones get trashed for some reason.
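
For the curious, this is roughly what the Reinhard step looks like outside MLVApp. A minimal numpy sketch of the global operator on linear RGB; the AP1 conversion is left out and the exposure handling is my assumption, not MLVApp's exact pipeline:

Code:
import numpy as np

def reinhard_tonemap(rgb_linear, exposure=1.0):
    """Global Reinhard operator x / (1 + x), applied per channel.

    rgb_linear: float array, shape (H, W, 3), scene-linear values >= 0.
    Returns display-referred values in [0, 1).
    """
    x = rgb_linear * exposure
    return x / (1.0 + x)

# Toy usage: a linear ramp gets compressed smoothly, highlights roll off instead of clipping.
ramp = np.linspace(0.0, 8.0, 9).reshape(3, 3, 1).repeat(3, axis=2)
print(reinhard_tonemap(ramp)[..., 0])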
#35
Very nice. It was interesting to see how beautiful that 50mm looks at 03min03s compared with the harshness of 03min02s. Not that the 24-105 is bad, it's just way too sharp for 'cinematic' footage, IMO. It would be interesting to test some diffusion, ZEEk, like a Digicon or Black Pro-Mist. A circular polarizer could also have helped in some scenes.
Is that black halo on blown highlights (like at 01:15) a side effect of aliasing? Also, it seems your LUT or processing oversaturated the skin at 01:48.
Great one. Liked the music choice too, particularly 02min37s.
#36
The software a1ex posted before, raspiraw, added initial support for this camera's sensor:
https://github.com/6by9/raspiraw
https://github.com/6by9/raspiraw/commit/dbe1acf64cba221787080ad06f79d3a5bccd171a

Technically, it would be possible to record raw at 24fps. The RPi 4 has USB 3.0 ports, so you could attach an SSD using an adapter.
They could add support for the MLV format in raspiraw using Ilia's lib, so we could process the footage in MLVApp... just an idea :P
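
Back-of-the-envelope numbers (mine, assuming the HQ camera's IMX477 with 12-bit packed raw; treat the modes as an assumption):

Code:
def raw_data_rate_mb_s(width, height, bits_per_pixel, fps):
    """Approximate sustained write rate in MB/s for uncompressed Bayer raw."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

print(raw_data_rate_mb_s(4056, 3040, 12, 24))  # ~444 MB/s: tight for a USB 3.0 SSD at full res
print(raw_data_rate_mb_s(2028, 1520, 12, 24))  # ~111 MB/s: easy in the 2x2 binned mode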
#37
Adobe is so shitty that they call "new" a feature the open source community got working 3 years ago in their own software (Voukoder). They keep adding more complexity without fixing their bugs, crashes and slowness.
It's very unfortunate that I still have to rely on their software.
#38
Quote from: cmh on May 16, 2020, 08:23:53 PM
it would probably take something like 6 hours per seconds of videos with my gtx 1060 6gb for getting a really good quality upscaling (10 iterations or so), maybe more.
That's pretty slow. Still, this super-resolution research is pretty useful sometimes. As an anecdote, a neighbour of mine was once robbed and got the guy on very low resolution CCTV. I tried to upscale it with lanczos/spline64 but it wasn't enough to identify the individuals. Maybe with MMSR it would be possible.
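
For reference, the classical upscale is a one-liner with Pillow (Lanczos resampling; spline64 would need something like VapourSynth instead). File names are just placeholders:

Code:
from PIL import Image

# Classical Lanczos upscale: smooth enlargement, but it cannot invent
# detail the way the GAN-based models discussed in this thread can.
img = Image.open("cctv_frame.png")
upscaled = img.resize((img.width * 4, img.height * 4), Image.LANCZOS)
upscaled.save("cctv_frame_4x.png")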
Quote
It's unfortunate I don't have another box hanging around.
TPU pricing seems to be decreasing fast (~$15 per hour). It would be nice to test how long it would take to crunch a 1-minute video.
Quote
edit: I love those AI upscaling posts btw.

Check this out:



Also:
https://github.com/cchen156/Seeing-Motion-in-the-Dark
https://github.com/elliottwu/DeepHDR
https://github.com/thangvubk/FEQE
#39
Quote from: garry23 on May 15, 2020, 04:16:37 PM
I thought some may be interested in this post https://www.strollswithmydog.com/open-raspberry-pi-high-quality-camera-raw/
That's nuts. Such a small camera, getting these results.
#40
They have pretrained models:
https://github.com/open-mmlab/mmsr/wiki/Training-and-Testing#training

I've tried their previous work, ESRGAN. It is very impressive. It takes a while to process without CUDA (I couldn't figure out how to make CUDA work on Windows and didn't have any linux distro installed at the time).
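
Running their pretrained ESRGAN weights looks roughly like this, from memory of the repo's own test.py (the RRDBNet arguments and file names here are assumptions, double-check against the actual script):

Code:
import cv2
import numpy as np
import torch
import RRDBNet_arch as arch  # architecture file shipped in the xinntao/ESRGAN repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = arch.RRDBNet(3, 3, 64, 23, gc=32)  # constructor args as I remember them from test.py
model.load_state_dict(torch.load('RRDB_ESRGAN_x4.pth'), strict=True)
model.eval().to(device)

# Read BGR uint8, convert to an RGB float NCHW tensor in [0, 1]
img = cv2.imread('input.png', cv2.IMREAD_COLOR).astype(np.float32) / 255.0
img = torch.from_numpy(img[:, :, ::-1].copy()).permute(2, 0, 1).unsqueeze(0).to(device)

with torch.no_grad():
    out = model(img).squeeze(0).clamp_(0, 1).permute(1, 2, 0).cpu().numpy()[:, :, ::-1]

cv2.imwrite('output_x4.png', (out * 255.0).round().astype(np.uint8))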
Quote
I should have used debian stable instead of fedora 32
Yes, definitely. apt is great for testing stuff. Most people in CV are using Ubuntu because of the Nvidia drivers, though.
#41
That's interesting. Thanks for sharing Frayfray!
#42
Hey folks, this is not related to ML, but I wanted to invite you to help the comma10k project. It's a dataset to help train OpenPilot, the open source autonomous driving software:


#43
That's so awesome @Frayfray! Very good quality. This would be very useful for autonomous driving or drone projects (maybe port it to smaccmpilot).
#45
I record the audio with a Tascam and manually sync it afterwards. It is a pain, indeed, but the final quality is very nice. For long shots I use the 600D instead and the 50D stays just for small inserts.
#46
Nice job. I really liked the music/cinematography at 4min16s, well done.
#47
Very nice @reddeercity! I might test it next week and see how it goes. Amazing how 50D still rocks so much, after all those years.
#48
Camera-specific Development / Re: Canon R5
May 04, 2020, 08:00:47 PM
This camera can potentially kill C300 and C500.
#49
Quote from: a1ex on May 04, 2020, 03:37:30 PM
The H7 is, indeed, limited to 640x480. The H7plus has OV5640, which is capable of 2592x1944 at 15 FPS. I don't expect this board to process such resolutions in real-time, at this frame rate, but working on cropped areas (regions of interest) might be doable.

Both have the option of a 640x480 global shutter module, or a FLIR Lepton module (thermal camera).
I see, I mixed up the Plus with the normal one. That's a very impressive resolution for such a small/cheap camera.
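
To illustrate a1ex's point about regions of interest, a minimal OpenMV MicroPython sketch (runs on the board itself; WQXGA2 should be the 2592x1944 frame size on recent firmware, if I remember the constants right):

Code:
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)       # single channel keeps the data rate down
sensor.set_framesize(sensor.WQXGA2)          # full 2592x1944 readout (~15 fps on the OV5640)
sensor.set_windowing((1096, 772, 400, 400))  # 400x400 crop around the centre: the region of interest
sensor.skip_frames(time=2000)                # let exposure settle

while True:
    img = sensor.snapshot()                  # only the cropped window is captured and processed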
Quote
Very nice! Isn't that the self-driving car startup of George Hotz?
Yes, exactly. The project is all open source, I couldn't believe it when I first saw it. And it works very well most of the time.
#50
Quote
I wonder how it compares with something like OpenMV H7plus (2592x1944 15fps, 1080p30, 720p60, 3.6x2.7mm active area, M12 mount) for real-time machine vision applications (visual servoing).
I think that's the resolution for still photography, no? Video capture seems to be 640x480px at 60 FPS (I couldn't find it in the official documentation - didn't try very hard, to be fair). It has Raw output though:
https://www.youtube.com/watch?v=8FVoSF34zNM

Quote from: a1ex on May 04, 2020, 08:10:16 AM
And you are no longer stuck with the default optics; you can use any other C/CS-mount lenses (heck, there were EOS M users adapting such lenses for filming), or - with adapters - pretty much any DSLR lenses.
Yeah, that's something awesome about these cameras. You could put on a 12mm f/1.8 for less noise in autonomous driving applications, for example.

For anyone interested, we are building a 10K image dataset for autonomous driving that will be available for free to everybody and will be used in OpenPilot/Comma.ai:
https://github.com/commaai/comma10k