Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Luther

Pages: [1] 2 3 ... 13
General Chat / Re: Canon DSLR line is dead
« on: July 23, 2020, 11:01:18 PM »
Wifi is pretty useful:
 - can upload images automatically as they're taken
 - no need to access the card door should your camera be in some weird rig (e.g., waterproof casing)
 - direct printing
 - allows remote control of camera from hundreds of metres away
The majority of people don't need those features, so the least they could do is offer a version without wifi, IMO.
Some of those features seem like overengineering to me, like direct printing. Why can't you just use the computer for that? Is it really that difficult to type for 20 seconds, instead of putting a wifi card inside the camera and paying $100 more for it? I don't think so.
That doesn't mean wifi is completely useless. Some of those are legit use cases.

Imagine a world where all you had to do is bring a camera and a laptop. Instead of your camera writing to an SD card, it transfers 12K RAW footage straight to your laptop's hard drive, where the laptop automatically starts converting the footage into proxies or H.266 files or whatever.

That is why we need wifi on a camera.
This is a delusion. There's no wifi that can transmit at such high speeds, and even if there was, it would be way too expensive, power hungry and big to put in a camera. And, again, why can't you just plug the card into the computer and copy? What's the problem with that?
Idk, I think some modern tech is pretty dumb.
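To put rough numbers on that claim (purely illustrative: a hypothetical '12K' sensor at 12288x6480, 14-bit, 24 fps; these are not the specs of any real camera):

```python
def raw_bitrate_bps(width: int, height: int, bit_depth: int, fps: int) -> int:
    """Uncompressed raw video bitrate in bits per second."""
    return width * height * bit_depth * fps

# Hypothetical 12K open-gate stream; the numbers are made up for illustration.
bps = raw_bitrate_bps(12288, 6480, 14, 24)
print(f"{bps / 1e9:.1f} Gbit/s")  # roughly 26.8 Gbit/s before any compression
```

Even generously compressed, that is an order of magnitude above what a camera-sized wifi link sustains in practice.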

While I agree with you on functionality, having wifi and wireless transmitters opens up a whole new can of worms on cameras:
Namely surveillance tracking and potential information leaks. If you read Snowden's autobiography, you know that these threats are very real (wifi triangulation, hardware backdoors, etc.).
So all in all, no, I'd rather not have wifi and keep my photos and location to myself. :) Or, even better, have a physical switch for it, so you can do remote shooting if needed.
Exactly. Imagine photojournalists covering some political issue and the intelligence service of some country erasing all the information.

General Chat / Re: Canon DSLR line is dead
« on: July 21, 2020, 10:43:11 PM »
I like mirrorless. It reduces mechanical complexity, manufacturing costs and weight. While I disagree with other 'modern tech' on cameras (like wifi; who the hell needs wifi on a camera?), mirrorless ain't one of them.

Share Your Videos / Re: 5.7K anamorphic 5D Mark III footage
« on: July 21, 2020, 10:39:02 PM »
Very good @masc! Stabilization is good and the music is nice. Some scenes are a bit overexposed, and I think a little sharpening in MLVApp would make some details pop more, but other than that it's flawless.
Just a question: why did you upscale to 8K?
Man, I need to get a 5d3 too.

General Chat / Re: Help with understanding. 10bit H.265 vs 10bit RAW?
« on: July 12, 2020, 05:37:36 AM »
I'm quite impressed by this R5 camera. And after working many years in this area, you don't get easily impressed...
Does anyone have 'open gate' sample footage (full-res raw output)? If it weren't so expensive, that camera would completely kill like 1/3 of the cinema camera industry.

General Chat / Re: Digitalising slide-film with magic lantern
« on: June 21, 2020, 02:35:24 AM »
This is great @natschil!
I've been looking to get into film again, but the price of quality scanners is pretty high, such as these ones from Plustek OpticFilm.

What is slowing things down quite a bit is that setting the exposure time isn't instantaneous. A large chunk of time is spent waiting just to be sure that the exposure being set has been set.
Have you tried doing multiple exposures using advanced bracketing and merging them later with HDRMerge or similar software? That way you wouldn't need to measure exposure; just shoot 2 f-stops above and below and let the software decide the optimal exposure for each area of the image...
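The bracket spacing is easy to compute, since each EV stop doubles or halves the exposure time. A tiny sketch (the helper name is mine, not from any tool):

```python
def bracket_shutter_speeds(base: float, stops: int = 2) -> list:
    # Each EV stop doubles (or halves) exposure time, so a +/-2 EV bracket
    # multiplies the base shutter speed by 1/4, 1 and 4.
    return [base * 2 ** ev for ev in (-stops, 0, stops)]

# A 1/100s base exposure bracketed at +/-2 EV:
print(bracket_shutter_speeds(0.01))  # [0.0025, 0.01, 0.04]
```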
What are you using to process the image? RawTherapee recently added this very nice feature to invert film scans:

Guess: 2...4 threads might help (not 100% sure).
An alternative would be to use aria2. It has options for tuning connections to reach maximum speeds. Example ~/.aria2/aria2.conf:
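A sketch of such a config, using standard aria2 options (the values are illustrative, not the original snippet):

```ini
# ~/.aria2/aria2.conf -- split each download across several connections
max-connection-per-server=16
split=16
min-split-size=1M
continue=true
file-allocation=falloc
```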

I don't know if hg clone already does this, but it would be nice to have a SHA256 for each split...
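For what it's worth, generating a manifest is a one-liner with coreutils (the split file names below are made up for illustration):

```shell
# Stand-ins for real split archives
printf 'part one' > repo.split.aa
printf 'part two' > repo.split.ab

# Write one SHA256 line per split into a manifest
sha256sum repo.split.* > SHA256SUMS

# Anyone downloading the splits can then verify them all at once
sha256sum -c SHA256SUMS
```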

So I 'cleaned the room' (reinstalled the latest QtCreator and MinGW, and cloned master again) and it worked. I don't know what the issue was before: whether you guys updated Qt and I was using a wrong version of it (it was working about 3 months ago), or whether the new commit from @Ilia3101 fixed it...
Anyway, master is working great on Windows 10 64-bit now, that's good.

Edit: master compiles and works without problem also on Windows 32-bit, while 64-bit produces a crash on startup
So you were able to fully compile on 64-bit? What are your settings? I think the float commit was what made it get stuck. It seems other people also had issues compiling tinyexpr on Windows:
@Luther: if you just install Qt with mingw32 or mingw64, you don't have to change any settings to be able to compile MLVApp. The MLVApp project is made to work with QtCreator's standard settings on Windows.
That was the first thing I tried yesterday. It didn't work on master now (some months back it worked without problems). The debugger complains that clang is not installed. I know QtCreator has its own clang binary, but for some reason it didn't work. After manually installing clang and setting QtCreator to use the new binary, it worked. But then float.h was not up to date.
Where is the problem in opening a project and hitting a compile button? Sorry.
If only it were that simple. QtCreator was 50+ GB when I first downloaded it. All of that just for what was supposed to be a GUI frontend for compilers.
Even typing "make" is more difficult.
Not really? I spent ~1h trying to figure out how to compile MLVApp master, while compiling st is as easy as doing "git clone && cd st && sudo make".
And you can do that in command line instead using QtCreator, if you like, and you will come to the same result.
Yes, but now you need to chain 5+ binaries in a row to do the same task people have been doing since the '80s.
Anyway, I don't want to be the obnoxious purist here; I just think some of these modern solutions are too complex and create more problems than they solve.

Share Your Videos / Re: Canon EOS M, 2.5k raw with 15-45mm ef-m lens
« on: June 11, 2020, 03:38:20 AM »
Pretty nice quality!

Master might be broken on Windows. I've tried multiple configs and still wasn't able to compile. Some notes for future reference:
- First error: clang not detected. Just install it using Chocolatey.
- You have to change the default compiler. Go to Tools > Options > Kits, then click on the kit you're using and choose the C and C++ compiler.
- Chocolatey doesn't seem to add the clang binaries to PATH. You have to add them manually.
- Second error: float.h is not up to date in the MinGW binary release (it seems). I replaced it using this version and it worked. The path to the original is something like "C:\Qt\Tools\mingw730_64\x86_64-w64-mingw32\include"
- Third error: the debugger failed. I don't know if it was specific to my machine. I tried directly linking lldb instead in the 'Kit'. The path to it is "C:\Program Files\LLVM\bin\lldb.exe"
- Some people suggested using the mkspec called "win32-clang-g++". Just update it in the 'Kit'.
- After all that, still multiple errors while compiling. Before I got angry and gave up, I tried to read some of the debug messages. I tried changing from clang.exe to clang-cl.exe (which seems to have better compatibility for some reason). No success. The issues seemed to be related to the linker, because way too many libs were not recognized. So I tried to find a way to use LLVM's own linker (lld), but I couldn't find a way to make QtCreator use it (or CMake, I dunno).

Windows is such a piece of shit for developing. QtCreator is bloated and buggy. Such a nightmare. I miss when software was as easy to compile as typing "sudo make".

But how to use this streamlined matching in Resolve?
Yeah, not sure. An alternative solution would be to use Log-C with Wide Gamut RGB. Then apply ARRI LUT on Resolve.

No, can't agree. Instead of using Rec.709 or AP1, use Adobe standard space and correct your output color space to the 16-235 level instead of full range (0-255)
Why would you limit your range? BT.601 is not used anymore; there's no possible scenario (that I can think of) where your workflow would benefit from it.
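For reference, the legal-range remap being discussed is the standard video-levels mapping: 8-bit full range gets squeezed into 16-235, discarding 36 of the 256 code values.

```python
def full_to_limited(code: int) -> int:
    # Map an 8-bit full-range value (0-255) to broadcast 'legal' range:
    # black at 16, white at 235, i.e. only 220 usable levels instead of 256.
    return 16 + round(code * 219 / 255)

print(full_to_limited(0), full_to_limited(128), full_to_limited(255))  # 16 126 235
```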
FYI: I like my image to be flat (log like) as I don't final grade with mlv app.
If you don't grade in MLVApp and you don't need to do color matching, then your goal should be to retain as much information as possible. Using Log-C with AP1 and exporting lossless would be the way to go. Or just convert to CDNG and work directly in Resolve; that would give much better/faster results.

Quote from: Danne
Maybe modifying rec709 itself could get us closer to adobe color?
Using AP1 instead of Rec.709 seems to improve colors in my tests. AdobeRGB creates artifacts in blue hues.

On the 5d2 & 50D, "camera matrix" does not do a good job; it messes with colors.
I disagree with that. At least for the 50D, skin tones improve a lot with the camera matrix.

Share Your Videos / Re: doc style protest footage (shot on eos m)
« on: June 04, 2020, 11:30:25 AM »
Very nice images. Better than anything else I've seen on news channels. Not just aesthetically pleasing (the EOS M is killing it; this seems straight out of some dystopian movie), but you also got the 'atmosphere' from the protesters (news channels often cut only the convenient bits).

Hardware and Accessories / Re: Advice on a wide lens for the EOS-M
« on: June 03, 2020, 05:51:38 PM »
+1 for Rokinon/Samyang.
If I had the money I would get the 24 mm f/1.4 with a Viltrox speedbooster.

General Chat / Re: Insane video super-resolution research
« on: June 03, 2020, 05:27:13 PM »
Although most of it is based on trained models, the words "Network" and "Deep" are widely used when the model needs to be trained first, before you can use it, right?
Yes, these are machine learning algorithms; they need to be trained on a dataset. Normally the author provides a pretrained model, so you only need to download and run it.

I already stumbled on "Learning to See in the Dark"; isn't that the model that is trained on a dataset of 50GB of photos... and needs at least 64GB of RAM in a computer to even be able to train the model?
Yes. And as I've said above, the author doesn't provide information about how to create your own dataset (for this specific network you need to create your own, because it uses raw noise information and that varies between different sensors).

I'm very interested in denoising and superresolution and such.
But i'd like to be able to run it on a late 2012 imac  :P
Won't be possible. Most of this research uses PyTorch/TensorFlow and requires CUDA...

I'm still very impressed with this script:
This seems to be burst image denoising (multiple images), right? The networks above are made for single-image denoise/super-res...
Also, if you have the time to take multiple photos, why not do long exposures with low ISO and blend them with HDRMerge? That is what I do whenever I can.
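The reason stacking exposures works can be sketched in plain Python (simulated sensor noise, not real MLV data): averaging N frames cuts random noise by roughly sqrt(N).

```python
import random

def average_frames(frames):
    # Average N noisy exposures pixel by pixel.
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def std(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

random.seed(42)
# 16 simulated 'frames' of a flat grey patch at level 100 with gaussian noise
frames = [[100.0 + random.gauss(0, 5) for _ in range(1000)] for _ in range(16)]
merged = average_frames(frames)
print(round(std(frames[0]), 2), round(std(merged), 2))  # noise drops by about 4x
```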

So if someone thinks there's a better script, which I could test, without the need of model training, I'm all ears  8)
My suggestions:
- For upscaling single images (not video), try ESRGAN. It works great, but you will need CUDA (you can run it directly on the CPU, but it takes hours to process).
- For denoising, try FFDNet or BM3D.

ps: btw, I never tested FFDNet/BM3D. I suggested them because they demonstrate good results on paper and are pretty fast. I've only tested ESRGAN at the time I'm writing this.

General Chat / Re: Insane video super-resolution research
« on: June 03, 2020, 04:37:07 AM »
The USRNet one has this py file "" in the models directory.
Does this mean it can be run on our own images, or is it just a paper?

Yes @Levas, all of them can be used on our own images. The code for USRNet is still being updated, it seems, because the author submitted it to CVPR 2020 (the biggest and most respected computer vision conference; it got postponed because of COVID this year). But you'd have to train the network yourself, as they haven't provided a pre-trained model yet.
For the ELD denoiser, the model is similar to the paper "Learning to See in the Dark" and uses a raw image dataset. You'd need to train the network specifically for the camera you're using, because noise changes from sensor to sensor (in the paper they used a Sony camera). The issue is that this dataset needs to be pre-processed, and the authors didn't provide any feedback on how exactly to do that.
Alternatives: use MMSR for super-res (the code and training from the OP is provided) and KAIR for denoise (FFDNet).

Raw Video / Re: How to check if a .DNG is really raw data
« on: June 03, 2020, 04:24:45 AM »
Use RawTherapee. In the demosaicing menu, choose "bayer". If it is a raw image, you will see the Bayer pattern (a mosaic of RGGB). This is the easiest way.
But for the images to be 200kb, someone probably applied lossy compression to those files.
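You can also check this programmatically by reading the TIFF PhotometricInterpretation tag (262): a value of 32803 means CFA, i.e. undemosaiced Bayer data. A minimal sketch (only parses a bare first IFD; a real DNG has edge cases this ignores):

```python
import struct

CFA = 32803  # PhotometricInterpretation value for a Bayer mosaic

def is_bayer_raw(data: bytes) -> bool:
    """Return True if the first IFD of a TIFF/DNG declares CFA (Bayer) data."""
    endian = {b'II': '<', b'MM': '>'}[data[:2]]          # byte-order mark
    ifd = struct.unpack(endian + 'I', data[4:8])[0]      # offset of first IFD
    n_entries = struct.unpack(endian + 'H', data[ifd:ifd + 2])[0]
    for i in range(n_entries):
        entry = data[ifd + 2 + 12 * i : ifd + 14 + 12 * i]
        tag, typ = struct.unpack(endian + 'HH', entry[:4])
        if tag == 262:  # PhotometricInterpretation
            # SHORT values sit in the first two bytes of the value field
            value = struct.unpack(endian + 'H', entry[8:10])[0]
            return value == CFA
    return False
```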

My cam is a 5D3, and build should be Nightly.2017Feb12.5D3123
You're using a very old build. Try the latest build from @Danne.

Has someone succeeded in using Cinelog-C from MLV App?
Cinelog-C is proprietary. MLVApp can't legally put their DCP in there...

Hi everyone, my dual-iso clips have lines and flickering also after the conversion (MLV to DNG)... ???
Which camera and build are you using? Also, if possible, provide a MLV sample...

Unfortunately, I don't have enough free space to convert all the clips to DNGs.
Once I know which clips I actually need, I'll convert just those few from MLV to DNG.
MLVApp is great for previewing. Only exporting is painfully slow sometimes.
Davinci needs a recent OSX, but Mojave can't work well with Nvidia, right?
As far as I know, that's not true. But you could use some Linux distro like Debian. It has Nvidia drivers, and DaVinci works on it (faster than on Windows/OSX). It's not surprising that Pixar uses Debian on their render farm...
So I went for a couple of Radeon VII GPUs
I like AMD too. I have an AMD CPU, and AMD is more cooperative with the open source community than Nvidia/Intel. But unfortunately CUDA is way ahead of OpenCL, and that is an Nvidia-specific feature. For heavy processing like you're doing, Nvidia is the only cost-effective solution.
How can I re-format my external storage without deleting my mlvs inside??? Is it possible???
This is called "in-place filesystem conversion". I don't think it is possible to do that from HFS+ to OpenZFS, though. You can read (not write) HFS+ on Windows using this tool (it seems... never tested). On Linux you can read and write HFS+ filesystems.
The best solution would be to get new HDDs (I highly suggest Western Digital instead of Seagate), then copy everything over and erase the old HDDs. This way you ensure everything is in its right place, with a fresh filesystem. ZFS is great for large amounts of data. You could also consider RAID mirroring.

How will that work with mlv files?
It seems that Transkoder doesn't handle the MLV codec.
The same way people work with MLV in Resolve: transcoding to CDNG. Transkoder can take full advantage of the GPU using CUDA. It might be expensive, but I would seriously consider it if I needed to process multiple terabytes of data.
I formatted my mlv-storage with HFS+
Consider using ZFS instead. This is what most people processing large amounts of data use. Windows and OSX have support for it (I don't know how well those projects work, since ZFS was built primarily for Linux/BSD).
I've sold my Nvidia cards
The older Nvidia architectures are very cheap nowadays. I bought my Nvidia 1050 for about $120. I think that's a fair price in an era where people pay $400 for Apple wheels...

@adrjork, you have a shitton of data. MLVApp is not really suited for that kind of thing (CPU-only processing). I'd say it's time for you to invest in something like Transkoder.

Raw Video / Re: DragonFrame with 5d markII
« on: May 25, 2020, 04:37:54 PM »
No, only HDMI output has higher resolution. USB output is limited, AFAIK.
