Since most of these are based on trained models: when the words "Network" or "Deep" show up, that means the model needs to be trained first, before you can use it, right?
Yes, these are machine learning algorithms; they need to be trained on a dataset. Normally the author provides a pretrained model, so you only need to download and run it.
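In PyTorch terms, "download and run" usually boils down to something like this (a minimal sketch; the `authors_repo` import and `SomeDenoiseNet` class are hypothetical placeholders for whatever architecture the author's code actually defines):

```python
import torch

# Hypothetical: the architecture class comes from the author's repository.
from authors_repo.model import SomeDenoiseNet

model = SomeDenoiseNet()
# Load the pretrained weights the author provides for download.
model.load_state_dict(torch.load("pretrained_weights.pth", map_location="cpu"))
model.eval()  # inference only: no training needed on your side

with torch.no_grad():
    noisy = torch.rand(1, 3, 256, 256)  # stand-in for your image tensor
    clean = model(noisy)
```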
I already stumbled on "Learning to See in the Dark". Isn't that the model that is trained on a 50 GB dataset of photos... and needs at least 64 GB of RAM just to be able to train it?
Yes. And as I said above, the author doesn't provide information about how to create your own dataset (for this specific network you need to build your own, because it uses raw noise information, which varies between sensors).
I'm very interested in denoising, super-resolution and such.
But I'd like to be able to run it on a late-2012 iMac.
That won't be possible. Most of these research projects use PyTorch/TensorFlow and require CUDA (i.e. an NVIDIA GPU)...
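For context, these scripts typically select the compute device like this (real PyTorch calls; on a machine without an NVIDIA GPU, like a 2012 iMac, it falls back to the much slower CPU path):

```python
import torch

# Most research scripts do something like this near the top:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# Every model and tensor then gets moved to that device:
x = torch.rand(1, 3, 256, 256).to(device)
# Without CUDA this runs on the CPU, which can be
# orders of magnitude slower for these networks.
```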
I'm still very impressed with this script:
http://www.magiclantern.fm/forum/index.php?topic=20999.50
This seems to be burst-image denoising (multiple images), right? The networks above are made for single-image denoising/super-resolution...
Also, if you have the time to take multiple photos, why not take long exposures at low ISO and blend them with HDRMerge? That is what I do whenever I can.
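To make the distinction concrete: the core of burst denoising is averaging aligned frames, which reduces uncorrelated noise by roughly sqrt(N). A naive sketch (the `shot_*.png` filenames are placeholders, and it assumes the frames are already aligned; real tools like the script above also handle alignment):

```python
import numpy as np
import imageio.v3 as iio  # assumes the imageio package is installed

# Load a burst of already-aligned shots of the same scene.
frames = [iio.imread(f"shot_{i}.png").astype(np.float32) for i in range(8)]

# Averaging N frames reduces uncorrelated noise by about sqrt(N):
# 8 frames -> roughly 2.8x less noise.
average = np.mean(frames, axis=0)

iio.imwrite("denoised.png", average.clip(0, 255).astype(np.uint8))
```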
So if someone knows a better script that I could test, without the need for model training, I'm all ears.
My suggestions:
- For upscaling single images (not video), try ESRGAN. It works great, but you will need CUDA (you can run it directly on the CPU, but it takes hours to process a single image); see the sketch after this list.
- For denoising, try FFDNet or BM3D.
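As promised above, here is roughly what ESRGAN inference looks like, adapted from the test script in the official repository (https://github.com/xinntao/ESRGAN); it assumes you've cloned the repo (so `RRDBNet_arch` is importable) and downloaded the pretrained RRDB_ESRGAN_x4.pth weights, and `input.png`/`output.png` are placeholders:

```python
import cv2
import numpy as np
import torch
import RRDBNet_arch as arch  # architecture file from the ESRGAN repo

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 4x RRDB model as defined in the repo; weights are downloaded separately.
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load("RRDB_ESRGAN_x4.pth", map_location=device))
model.eval()
model = model.to(device)

# Read image, convert BGR uint8 -> RGB float tensor in [0, 1].
img = cv2.imread("input.png", cv2.IMREAD_COLOR).astype(np.float32) / 255.0
img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1)))
img = img.unsqueeze(0).to(device)

with torch.no_grad():
    out = model(img).squeeze(0).clamp_(0, 1).cpu().numpy()

# Back to BGR uint8 and save the 4x upscaled result.
out = np.transpose(out[[2, 1, 0], :, :], (1, 2, 0))
cv2.imwrite("output.png", (out * 255.0).round().astype(np.uint8))
```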
PS: By the way, I've never tested FFDNet/BM3D myself. I suggested them because they show good results in their papers and are pretty fast. At the time of writing I've only tested ESRGAN.
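If you do want to give BM3D a shot without any training, there is a reference Python implementation on PyPI (pip install bm3d). A minimal sketch, untested by me per the caveat above; `noisy.png` is a placeholder and sigma_psd is something you'd tune per image:

```python
import bm3d
import numpy as np
import imageio.v3 as iio

# Load a noisy image and convert to grayscale floats in [0, 1]
# (the package also provides bm3d_rgb for colour images).
img = iio.imread("noisy.png").astype(np.float32) / 255.0
if img.ndim == 3:
    img = img.mean(axis=-1)  # quick-and-dirty grayscale conversion

# sigma_psd is the assumed noise standard deviation (here ~10/255);
# tune it per image: too high blurs detail, too low leaves noise.
denoised = bm3d.bm3d(img, sigma_psd=10 / 255)

iio.imwrite("denoised.png", (np.clip(denoised, 0, 1) * 255).astype(np.uint8))
```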