Messages - Luther

#1
TL;DR: use MLV (raw recording) if you need better quality. You can get continuous recording at about 1800px using a faster card with the SD "overclock". Even if it's not 1080p, it will be much better than the H.264 recording.
For improving videos you've already recorded, there's not much you can do, but I guess you could try Topaz VEAI.
#2
General Help Q&A / Re: RAW file extension
February 22, 2021, 04:07:57 PM
It can be directly imported into some software, like MLVApp. Most other programs won't recognize the MLV format, so you have to convert it; you can do that in MLVApp too. For DaVinci Resolve, you can convert the MLV to a DNG sequence and then import that. For Premiere Pro and Final Cut, I suggest using MLVApp to convert to ProRes or DNxHR.
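If you prefer the command line, the DNG route can also be done with Magic Lantern's mlv_dump tool. This is only a rough sketch; it assumes mlv_dump is in your PATH and that your build supports the --dng and -o options:

# extract every frame of input.mlv as a DNG sequence (frame_000000.dng, frame_000001.dng, ...)
mlv_dump --dng -o frame_ input.mlv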
#3
General Chat / Re: Anamorphic lenses
February 22, 2021, 04:04:11 PM
Anamorphic is amazing, but not for the "small fish". Cheap anamorphic lenses are complicated. I had one and I rarely used it. The reason is that, on most of them, you need to focus both lenses (anamorphic and spherical) and you cannot focus very close (you'd need a diopter filter). I sold mine to get an f/1.2, a decision I do not regret.
You can still get the "anamorphic look" with a spherical lens. Just cut an oval-shaped mask out of a piece of wood and mount it in an old filter frame. That way you get the oval, anamorphic-style bokeh without sacrificing sharpness or practicality. Of course it won't be the same as a Panavision/Cooke anamorphic lens, but it's at least something.
#4
General Chat / Re: The future for ML
February 22, 2021, 03:55:24 PM
An 11-minute video to explain a feature that could have been covered in 30 seconds...
But yeah, I get your point, garry. I just don't see how ML can get much better; it's pretty much hardware-limited by now.
While we are at it, I've been testing frame interpolation using RIFE and I immediately thought about how this could benefit low-framerate/high-resolution MLVs. For example, recording at 12 fps in 4K and post-processing with RIFE to get 24 fps.
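As a rough illustration of that 12 fps to 24 fps idea (using ffmpeg's built-in motion interpolation instead of RIFE, which needs its own scripts), something like this works:

# interpolate a 12 fps clip up to 24 fps with motion-compensated interpolation
ffmpeg -i input_12fps.mov -vf "minterpolate=fps=24:mi_mode=mci" -c:v libx264 -crf 19 output_24fps.mp4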
Another idea: people are using Canon cameras to build datasets for training machine learning models. Magic Lantern could help annotate these datasets, adding metadata that could be selected manually on the fly. For example: you could input 10 keywords, and after each photograph is taken, ML asks which keyword to pick and automatically writes it to the metadata, which can then be parsed on the computer.
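The computer-side half of that already works with exiftool; a small sketch (the keyword and file names here are just placeholders):

# write an IPTC keyword into a photo's metadata
exiftool -Keywords+="keyword1" IMG_0001.CR2
# later, find all files carrying that keyword
exiftool -if '$Keywords =~ /keyword1/' -filename -r .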
I have also been testing video super-resolution, but there's not much future there for amateurs, because networks like EDVR/TecoGAN require too much VRAM. Only ESRGAN is feasible at the moment, but it has no temporal consistency, so it's not really usable for video (although the results on photographs are amazing). There's also denoising, where DPIR works very well.
So, indeed, machine learning software can really help with the hardware limitations that (old) canon cameras have.
#5
The point of shooting Raw, for me, is not even the "organic look". To be honest, digital noise is not as pleasant as film grain, so I always remove a little of it. The point for me is the flexibility. If a scene gets overexposed, I can recover it. And the color grading is much easier and gives better results. So the footage is more "forgiving".

Once somebody asked me to edit footage from a Sony camera and I simply couldn't do the color grading, because multiple colors were out of gamut (it was a show with many bright lights). That kind of thing just doesn't happen with Raw images.

Now the real question is whether ML Raw is still worth it, because more cameras with native Raw recording are appearing now... perhaps that's why this forum has been so silent these last months.

ps: when people ask me why I like ML Raw, I sometimes show this amazing work from Marius and they get stunned - https://www.magiclantern.fm/forum/index.php?topic=24858.msg225386
#6
Great video, as always. Two things I've noticed:
- The highlights are oversaturated (particularly on your face). Using a curve rolloff would help a bit.
- A 30 Mbps bitrate is too low for that resolution... YouTube itself recommends about 50 Mbps. Personally I just use a CRF (constant rate factor) of 19. It always gives near perceptually lossless output without being as heavy as DNxHR/ProRes (example below).
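For reference, a typical export along those lines with ffmpeg (just a sketch; adjust the preset and audio settings to taste):

# near perceptually lossless H.264 export using a constant rate factor of 19
ffmpeg -i input.mov -c:v libx264 -crf 19 -preset slow -pix_fmt yuv420p -c:a aac -b:a 320k output.mp4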
#7
General Chat / Re: Canon DSLR line is dead
July 23, 2020, 11:01:18 PM
Quote from: names_are_hard on July 21, 2020, 10:59:00 PM
Wifi is pretty useful:
- can upload images automatically as they're taken
- no need to access the card door should your camera be in some weird rig (e.g., waterproof casing)
- direct printing
- allows remote control of camera from hundreds of metres away
The majority of people don't need those features, so the least they could do is offer a version without wifi, IMO.
Some of those features seem like overengineering to me, like direct printing. Why can't you just use the computer for that? Is it really that difficult to spend 20 seconds typing, instead of putting a wifi chip inside the camera and paying $100 more for it? I don't think so.
That doesn't mean wifi is completely useless. Some of those are legitimate use cases.

Quote from: yourboylloyd on July 22, 2020, 03:12:38 AM
Imagine a world, where all you had to do is bring a camera and a laptop. Instead of your camera writing to an SD card, your camera transfers 12K RAW footage straight to your laptop's harddrive where your laptop automatically starts to converts the footage into proxys or h.266 files or whatever.

That is why we need wifi on a camera.
This is a delusion. There's no wifi that can transmit at such high speeds, and even if there were, it would be way too expensive, power-hungry and big to put in a camera. And, again, why can't you just plug the card into the computer and copy? What's the problem with that?
Idk, I think some modern tech is pretty dumb.

Quote from: Satis on July 23, 2020, 11:14:06 AM
While for functionality I agree with you, having WIFI and wireless transmitters open up a whole new can of worms on cameras:
Namely surveillance tracking and potential information leaks. If you read Snowden's autobiography you know that these threats are very real. (wifi triangulation, hardware backdoors etc.)
So all in all no, I rather not have wifi and keep my photos and location for myself. :) Or, even better have a physical switch for it, so  you can do remote shooting if needed.
Exactly. Imagine photojournalists covering some political issue and the intelligence services of some country erasing all the information.
#8
General Chat / Re: Canon DSLR line is dead
July 21, 2020, 10:43:11 PM
I like mirrorless. It reduces mechanical complexity, manufacturing cost and weight. While I disagree with other 'modern tech' on cameras (like wifi; who the hell needs wifi on a camera?), mirrorless ain't one of them.
#9
Very good @masc! The stabilization is good and the music is nice. Some scenes are a bit overexposed, and I think a little sharpening in MLVApp would make some details pop more, but other than that it's flawless.
Just a question: why did you upscale to 8K?
Man, I need to get a 5d3 too.
#10
I'm quite impressed by this R5 camera. And after working many years in this area, you don't get easily impressed...
Does anyone have 'open gate' sample footage (full-resolution raw output)? If it weren't so expensive, that camera would completely kill like 1/3 of the cinema camera industry.
#11
This is great @natschil !
I've been looking to get into film again, but the price of quality scanners, such as the Plustek OpticFilm ones, is pretty high.

Quote from: natschil on June 20, 2020, 10:51:42 AM
What is slowing things down quite a bit is that setting the exposure time isn't instantaneous. A large chunk of time is spent waiting just to be sure that the exposure being set has been set.
Have you tried doing multiple exposures with advanced bracketing and merging them later using HDRMerge or similar software? That way you wouldn't need to measure exposure; just shoot 2 f-stops above and below and let the software decide the optimal exposure for each area of the image...
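HDRMerge can also be scripted for batches. A rough sketch, assuming the hdrmerge binary accepts a list of raw files and -o for the output DNG (check its --help; the options may differ between versions):

# merge a 3-shot bracket (-2 / 0 / +2 EV) into a single high-dynamic-range DNG
hdrmerge -o merged.dng IMG_0001.CR2 IMG_0002.CR2 IMG_0003.CR2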
What are you using to process the images? RawTherapee recently added this very nice feature to invert film scans: http://rawpedia.rawtherapee.com/Film_Negative
#12
Quote from: a1ex on June 14, 2020, 11:34:27 PM
Guess: 2...4 threads might help (not 100% sure).
An alternative would be to use aria2. It has options to tune the number of connections and parallel downloads to reach maximum speed. Example ~/.aria2/aria2.conf:

min-split-size=1M
max-connection-per-server=16
max-concurrent-downloads=16
optimize-concurrent-downloads=true
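Or, without touching the config file, roughly the same thing from the command line (the URL is just a placeholder):

# 16 connections per server, split each file into 16 pieces, up to 16 parallel downloads
aria2c -x 16 -s 16 -j 16 --optimize-concurrent-downloads=true https://example.com/file.tar.xz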


I don't know if hg clone already does this, but it would be nice to have a SHA256 for each split...
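Generating and checking those sums on the receiving end is trivial (the *.part names are just placeholders):

# write checksums for all downloaded parts, then verify them later
sha256sum *.part > SHA256SUMS
sha256sum -c SHA256SUMS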
#13
So I 'cleaned the room' (reinstalled the latest QtCreator and MinGW and cloned master again) and it worked. I don't know what the issue was before, whether you guys updated Qt and I was using the wrong version (it was working about 3 months ago), or the new commit from @Ilia3101 fixed it...
Anyway, master is working great on Windows 10 64-bit now, that's good.
#14
Quote from: masc on June 11, 2020, 09:03:45 AM
Edit: master compiles and works without problem also on Windows 32bit, while 64 bit produces a crash on startup)
So you were able to fully compile on 64-bit? What are your settings? I think the float commit was what broke it. It seems other people also had issues compiling tinyexpr on Windows:
https://github.com/codeplea/tinyexpr/issues/44
https://github.com/codeplea/tinyexpr/pull/54
Quote
@Luther: if you just install Qt with mingw32 or mingw64 you don't have to change anything in settings for beeing able to compile MLVApp. MLVApp project is made for working with QtCreator standard settings on Windows.
That was the first thing I tried yesterday. It didn't work on master now (some months back it worked without problems). The debugger complains that clang is not installed. I know QtCreator has its own clang binary, but for some reason it didn't work. After manually installing clang and setting QtCreator to use the new binary, it worked. But then float.h was not up to date.
Quote
Where is the problem to open a project and hit a compile button. Sorry.
If only it were that simple. QtCreator was over 50GB when I first downloaded it. All of that for what was supposed to be just a GUI frontend for compilers.
Quote
Even typing "make" is more difficult.
Not really? I spent ~1 hour trying to figure out how to compile MLVApp master, while compiling st is as easy as "git clone https://git.suckless.org/st && cd st && sudo make".
Quote
And you can do that in command line instead using QtCreator, if you like, and you will come to the same result.
Yes, but now you need to chain 5+ binaries in a row to do the same task people have been doing since the 80s.
Anyway, I don't want to be the obnoxious purist here; I just think some of these modern solutions are too complex and create more problems than they solve.
#15
Pretty nice quality!
#16
Master might be broken on Windows. I've tried multiple configurations and still wasn't able to compile. Some notes for future reference:
- First error: clang is not detected. Just install it using Chocolatey (see the sketch after this list).
- You have to change the default compiler: go to Tools > Options > Kits, then click on the kit you're using and choose the C and C++ compiler.
- Chocolatey doesn't seem to add the clang binaries to PATH. You have to add them manually.
- Second error: float.h is not up to date in the MinGW binary release (it seems). I replaced it with this version and it worked. The path to the original is something like "C:\Qt\Tools\mingw730_64\x86_64-w64-mingw32\include"
- Third error: the debugger failed. I don't know whether it was specific to my machine. I tried directly linking lldb in the 'Kit' instead. The path to it is "C:\Program Files\LLVM\bin\lldb.exe"
- Some people suggested using the mkspec called "win32-clang-g++". Just update it in the 'Kit'.
- After all that, still multiple errors while compiling. Before I got angry and gave up, I tried to read some of the debug messages. I tried changing from clang.exe to clang-cl.exe (which seems to have better compatibility for some reason). No success. The issues seemed to be related to the linker, because way too many libs were not recognized. So I tried to find a way to use LLVM's own linker (lld), but I couldn't find a way to make QtCreator use it (or cmake, I dunno).
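For reference, the Chocolatey step from the first bullet is just the following, run from an elevated PowerShell prompt (a sketch; the LLVM install path may differ, and you still have to add C:\Program Files\LLVM\bin to PATH yourself):

# install clang/LLVM via Chocolatey
choco install llvm -y
# after adding the LLVM bin folder to PATH, check that clang is reachable
clang --version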


Windows is such a piece of shit for developing. QtCreator is bloated and buggy. Such a nightmare. I miss when software used to be as easy to compile as typing "sudo make".
#17
Quote from: Danne on June 07, 2020, 08:50:42 AM
But how to use this streamlined matching resolve?
Yeah, not sure. An alternative solution would be to use Log-C with Wide Gamut RGB, then apply an ARRI LUT in Resolve.

Quote from: reddeercity on June 07, 2020, 09:33:12 AM
No can't agree , instead of using rec709 or AP1 use Adobe standard space and correct your output color space to 16-235 level instead of full range (0-255)
Why would you limit your range? BT.601 isn't used anymore; there's no possible scenario (that I can think of) where your workflow would benefit from it.
Quote
FYI: I like my image to be flat (log like) as I don't final grade with mlv app.
If you don't grade in MLVApp and you don't need to do color matching, then your goal should be to retain as much information as possible. Using Log-C with AP1 and exporting lossless would be the way to go. Or just convert to CDNG and work directly in Resolve; that would give much better/faster results.
#18
Quote from: Danne
Maybe modifying rec709 itself could get us closer to adobe color?
Using AP1 instead of Rec.709 seems to improve colors in my tests. AdobeRGB creates artifacts in blue hues.

Quote from: reddeercity on June 07, 2020, 07:56:11 AM
On the 5d2 & 50D "camera matrix" does not do a good job , messes with colors .
I disagree with that. At least for 50D, the skin tones improve a lot with camera matrix.
#19
Very nice images. Better than anything else I've seen on news channels. Not just aesthetically pleasing (the EOS M is killing it; this seems straight out of some dystopian movie), but you also got the 'atmosphere' from the protesters (news channels often cut only the convenient bits).
#20
+1 for Rokinon/Samyang.
If I had the money I would get the 24 mm f/1.4 with a Viltrox speedbooster.
#21
Quote from: Levas on June 03, 2020, 04:58:06 PM
Although most of it is based by trained models, the words Network and Deep are widely used when the model needs to be trained first, before you can use it, right ?
Yes, these are machine learning algorithms; they need to be trained on a dataset. Normally the author provides a pretrained model, so you only need to download and run it.

Quote
I already stumbled on  "Learning to See in the Dark" , isn't that the model that is trained by a dataset of 50Gb of photos...and needs at least 64Gb of ram in a computer to be even able to train the model.
Yes. And as I've said above, the author doesn't provide information about how to create your own dataset (for this specific network you need to create your own, because it uses Raw noise information and that varies between different sensors).

Quote
I'm very interested in denosing and superresolution and such.
But i'd like to be able to run it on a late 2012 imac  :P
That won't be possible. Most of this research uses PyTorch/TensorFlow and requires CUDA...

Quote
I'm still very impressed with this script:
http://www.magiclantern.fm/forum/index.php?topic=20999.50
This seems to be burst-image denoising (multiple images), right? The networks above are made for single-image denoising/super-resolution...
Also, if you have time to take multiple photos, why not take long exposures at low ISO and blend them with HDRMerge? That's what I do whenever I can.

Quote
So if someone thinks there's a better script, which I could test, without the need of model training, I'm all ears  8)
My suggestions:
- For upscaling single images (not video), try ESRGAN. It works great, but you will need CUDA (you can run it directly on the CPU, but it takes hours to process). There's a rough usage sketch at the end of this post.
- For denoising, try FFDNet or BM3D.

ps: btw, I never tested FFDNet/BM3D. I suggested them because they show good results in their papers and are pretty fast. I've only tested ESRGAN at the time of writing.
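Running ESRGAN on a folder of images looks roughly like this. This is a sketch from memory of the xinntao/ESRGAN repo; the script name, model file and folder layout may differ in the version you clone, so check its README:

# clone the repo and download a pretrained 4x model into models/ (linked from the README)
git clone https://github.com/xinntao/ESRGAN && cd ESRGAN
# put your low-resolution images into the LR/ folder, then run the test script
python test.py
# upscaled results end up in the results/ folder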
#23
Quote from: Levas on June 02, 2020, 08:12:59 PM
The USRNet one has this py file "network_usrnet.py" in the models directory.
Does this mean this can be run on our own images, or is it just a paper ?

Yes @Levas, all of them can be used on our own images. The code for USRNet is still being updated, it seems, because the author submitted it to CVPR 2020 (the biggest and most respected computer vision conference; it got postponed because of COVID this year). But you'd have to train the network yourself, as they haven't provided a pre-trained model yet.
For the ELD denoiser, the model is similar to the "Learning to See in the Dark" paper and uses a Raw image dataset. You'd need to train the network specifically for the camera you're using, because the noise differs between sensors (in the paper they used a Sony camera). The issue is that this dataset needs to be pre-processed, and the authors didn't give any feedback on how exactly to do that.
Alternatives: use MMSR for super-resolution (the code and training from the OP are provided) and KAIR for denoising (FFDNet).
#24
Use RawTherapee. In the demosaicing menu, choose "bayer". If it is a raw image, you will see the Bayer pattern (a mosaic of RGGB). This is the easiest way.
But for the images to be only 200 kB, someone probably applied lossy compression to those files.