The A.I Up-Scaling for 1x3 and 5x3 Binning Mods

Started by theBilalFakhouri, July 05, 2019, 02:13:04 PM



theBilalFakhouri

1x1  Resolution: 5208x2214
1x1" border="0

1x3 MLVApp Resolution: 1736x2214 to 5208x2214
1x3-A-I" border="0

1x3 Topaz Gigapixel A.I Resolution: 1736x2214 up-scaled to 5208x6642 in Topaz Gigapixel, then over-sampled to 5208x2214 in After Effects
1x3" border="0

These images are cropped to a specific 1280x544 area. Download the full 5K images here and do your pixel peeping :D

Can 1x3 be better than 1x1?
Oh yes, with only minor artifacts! This is just one example; in some cases it may not look this good, but it will still be a lot better than normal 1x3 stretching.

The problems:
It takes a lot of time, at least on my ordinary laptop (NVIDIA 840M, i5-4210U, 12 GB RAM). Topaz Gigapixel A.I uses the GPU or CPU, so if your system is better than mine, maybe it will be 50x faster; for me it took about 15 to 20 minutes. I can upscale by width, so I just entered 5208 to get my desired resolution, but it also upscales the height automatically, which makes the process longer. There is no anamorphic de-squeeze, e.g. I can't upscale 1736x2214 to 5208x2214 directly; I have to upscale to 5208x6642 first and then stretch the height back down. You don't have to upscale the width by 3x like I did (1736 to 5208); instead you can go to 3840 and squeeze the height down to the correct aspect ratio, depending on what you want.
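
A rough sketch of the resolution arithmetic described above (my own example numbers; nothing here comes from Gigapixel itself, it just reproduces the workflow):

#include <cstdio>

// 1x3 binned footage is squeezed 3x horizontally, so the upscaler has to
// grow both axes, and the height is then squashed back down in post.
int main()
{
    const int srcW = 1736, srcH = 2214;   // 1x3 frame as recorded
    const int targetW = 5208;             // desired final width (3x here)

    const double scale  = (double)targetW / srcW;        // 3.0 for this example
    const int upscaledH = (int)(srcH * scale + 0.5);     // 6642: height grows too
    const int finalH    = srcH;                          // squash back to 2214

    printf("Upscaler output  : %d x %d\n", targetW, upscaledH);
    printf("After de-squeeze : %d x %d\n", targetW, finalH);
    // With a 3840 target width the scale is ~2.21 and the height is squeezed
    // down to the matching aspect ratio instead.
    return 0;
}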

It's also a better method for de-squeezing 5x3 than normal stretching; more examples are coming.
I started with this video to understand some things:
https://www.youtube.com/watch?v=Tr8eGOXPv7I&t

You can upscale videos by converting them to an image sequence first and loading the images. It's better to upscale before stretching 1x3 or 5x3 footage, so set the stretch value from 0.33 to 1.0 (or from 1.67 to 1.0) in MLVApp, then export as an image sequence.

I think this method will only be practical for video if it's a real-time or nearly real-time process; I'm not sure how long it would take on high-end systems.

What about upscaling 1x1 5208x2214 to 10K?
Of course it will be better than 1x3, so try it. The point here is to get better 1x3 videos, maybe real 5K or 4.5K :D

I wanted to make some video examples but I can't do it on my system. If you have a good system, please show us your clips :D

Jip-Hop

Cool stuff, looks good! Maybe this tool could be used instead?
https://github.com/IBM/MAX-Image-Resolution-Enhancer
Those results look pretty good to me too and since it's open source we can modify it ^^

ilia3101

Quote from: Jip-Hop on July 05, 2019, 03:19:04 PM
Cool stuff, looks good! Maybe this tool could be used instead?
https://github.com/IBM/MAX-Image-Resolution-Enhancer
Those results look pretty good to me too and since it's open source we can modify it ^^

Nice link! It was actually really easy to run, but it could not handle images bigger than around 500x300 on my 8 GB computer. To use it for raw video we would have to send images to it in chunks, then blend them back together. It would be quite slow.
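
A minimal sketch of that chunking idea (nothing from the IBM repo itself; the 500x300 tile size comes from the limit above, and the 32 px overlap is an arbitrary guess for the blend region): split the frame into overlapping tiles small enough for the model, upscale each tile, then feather the overlapping areas when stitching the results back together.

#include <algorithm>
#include <cstdio>
#include <vector>

// One tile of the source frame; neighbouring tiles overlap so the seams
// can be blended after each tile has been upscaled separately.
struct Tile { int x, y, w, h; };

// Plan tiles no larger than maxW x maxH, stepping by (max - overlap) so
// each tile shares 'overlap' pixels with its left/top neighbour.
static std::vector<Tile> planTiles(int frameW, int frameH,
                                   int maxW, int maxH, int overlap)
{
    std::vector<Tile> tiles;
    const int stepX = maxW - overlap;
    const int stepY = maxH - overlap;
    for (int y = 0; y < frameH; y += stepY) {
        for (int x = 0; x < frameW; x += stepX) {
            Tile t;
            t.x = x;
            t.y = y;
            t.w = std::min(maxW, frameW - x);
            t.h = std::min(maxH, frameH - y);
            tiles.push_back(t);
        }
    }
    return tiles;
}

int main()
{
    std::vector<Tile> tiles = planTiles(1736, 2214, 500, 300, 32);
    printf("%zu tiles to upscale and blend\n", tiles.size());
    for (const Tile &t : tiles)
        printf("tile at (%d,%d) size %dx%d\n", t.x, t.y, t.w, t.h);
    return 0;
}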

Test:


It increased the dimensions by 4X, but I'd say the resolution increase is only worth about 1.5X - looks a bit sharper but with too many fake edges and weird artifacts.

masc

Quote from: Jip-Hop on July 05, 2019, 03:19:04 PM
Cool stuff, looks good! Maybe this tool could be used instead?
https://github.com/IBM/MAX-Image-Resolution-Enhancer
Those results look pretty good to me too and since it's open source we can modify it ^^
Looks promising... but is there a Python specialist? Maybe someone could translate the Python code to C/C++ ;)
5D3.113 | EOSM.202

theBilalFakhouri

Talking about translating Python, these are gonna be good :D

Quote from: Luther on March 26, 2019, 12:06:14 PM
:(

The processing would take hours without CUDA support. Yeah, not really feasible. But neural networks open a new world of possibilities for image processing. The first time I read this stuff I couldn't believe it was true. Down below there are some impressive deblurring algorithms [1][2][3][4], moiré reduction [5] and super-resolution [6][7].

If someone here is looking to upgrade the demosaicing in MLVApp, I think it would be best to look at what the high-end industry is using right now.
I'm not an expert, but most people recording with Panavision DXL2/mini or ALEXA LF seem to use Colorfront Transkoder or Codex Production Suite. ARRI recommends their own algorithm, called "ADA-5", that can be tested using their freeware software (sample footage from ALEXA here). This ADA-5 is now a standard called "RDD 31:2014" (couldn't access the paper myself).



[1] http://openaccess.thecvf.com/content_ECCV_2018/papers/Jiangxin_Dong_Learning_Data_Terms_ECCV_2018_paper.pdf
[2] https://github.com/jacquelinelala/GFN
[3] https://github.com/jiangsutx/SRN-Deblur
[4] https://arxiv.org/abs/1808.00605
[5] https://arxiv.org/abs/1805.02996
[6] https://github.com/jiangsutx/SPMC_VideoSR
[7] https://github.com/xinntao/ESRGAN

ilia3101

Currently it has to run in a Docker container, which is a virtual machine as far as I know, so first we must figure out that part and how to remove it (I have no clue why this shit is used). It also depends on TensorFlow, and I'm not sure how to handle that in an efficient way.

Seems like https://github.com/IBM/MAX-Image-Resolution-Enhancer/blob/master/core/model.py and https://github.com/IBM/MAX-Image-Resolution-Enhancer/blob/master/core/SRGAN/model.py are the main files.

To use it as it is, you run a command telling it the location of an image file; we could just try that method for simplicity.
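
A quick sketch of that "just call it per frame" approach (the enhance command is a hypothetical placeholder, not the real invocation of the IBM model; I only mean to show the per-image loop):

#include <cstdio>
#include <cstdlib>
#include <string>

// Naive per-frame driver: loop over an exported PNG sequence and hand each
// file to an external enhancer command. "enhance_frame.sh" is a made-up
// wrapper name standing in for whatever the Docker setup ends up exposing.
int main()
{
    const std::string enhanceCmd = "./enhance_frame.sh";  // hypothetical wrapper
    const int frameCount = 100;                           // example sequence length

    for (int i = 0; i < frameCount; ++i) {
        char name[64];
        snprintf(name, sizeof(name), "frame_%06d.png", i);
        const std::string cmd = enhanceCmd + " " + name;
        if (std::system(cmd.c_str()) != 0) {
            fprintf(stderr, "enhancer failed on %s\n", name);
            return 1;
        }
    }
    return 0;
}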

masc

Searching through GitHub, there are also some C++ solutions for super resolution, but they all need OpenCV... I would love to not add further libraries... :P ...and I have no idea how slow it would get with those...
5D3.113 | EOSM.202

ilia3101

Oh yeah, that post from Luther was very useful, thanks for bringing it up again.

masc

@Ilia... would you like to do some tests? This looks good - a pure C++ implementation which works without any library!
https://github.com/rageworx/libsrcnn
It implements this:
http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html

Edit: after compiling I get a one-function library (plus a second configuration function):
typedef enum DLL_PUBLIC
{
    SRCNNF_Nearest = 0,
    SRCNNF_Bilinear,
    SRCNNF_Bicubic,
    SRCNNF_Lanczos3,
    SRCNNF_Bspline
}SRCNNFilterType;

void DLL_PUBLIC ConfigureFilterSRCNN( SRCNNFilterType ftype );
int  DLL_PUBLIC ProcessSRCNN( const unsigned char* refbuff,
                              unsigned w, unsigned h, unsigned d,
                              float muliply,
                              unsigned char* &outbuff,
                              unsigned &outbuffsz,
                              unsigned char** convbuff,
                              unsigned* convbuffsz);
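
For anyone curious, this is roughly how I'd expect it to be called on an interleaved 8-bit RGB buffer, based only on that declaration (an untested guess: I'm assuming 'd' is the channel count, that the two "conv" arguments may be NULL, that the output buffer is allocated by the library, and that a non-zero return means failure - test.cpp in the repo is the authoritative example). The filter type presumably selects the conventional interpolation used for the initial upscale, which the CNN then refines.

#include <cstddef>
#include "libsrcnn.h"   // assuming this is the header name in the repo

// Hypothetical helper: upscale a w x h RGB image by 2x with libsrcnn.
unsigned char *upscale2x(const unsigned char *rgb, unsigned w, unsigned h)
{
    ConfigureFilterSRCNN(SRCNNF_Nearest);    // pre-interpolation before the CNN pass

    unsigned char *out = NULL;
    unsigned outSize = 0;
    int err = ProcessSRCNN(rgb, w, h, 3,     // 3 channels (RGB), assumed meaning of 'd'
                           2.0f,             // scale factor
                           out, outSize,     // result buffer, allocated by the library (assumed)
                           NULL, NULL);      // intermediate buffer not needed (assumed)
    if (err != 0 || out == NULL)
        return NULL;

    // 'out' should now hold a (2*w) x (2*h) x 3 image of outSize bytes.
    return out;
}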
5D3.113 | EOSM.202

ilia3101

That looks pretty good; I'm confused about why it has those filter options though.

masc

Could be easily tested on a single PNG export in MLVApp... maybe I'll give it a try. In test.cpp you can see how to use it... it's just cooking with water. ;)
5D3.113 | EOSM.202

Jip-Hop

I'd be curious to see how these solutions compare to DaVinci Resolve Super Scale: https://www.premiumbeat.com/blog/resolve-15-super-scale-feature/

I never use it because it's slow (so that's at least one thing they have in common). But it might be nice for the anamorphic de-squeeze, and it would skip exporting to an image sequence first.

ilia3101

@masc4ii please do try it on the PNG export.

@Jip-Hop could you do a test frame and upload both the original and the super-scaled version? (I do not have DaVinci.)

masc

Quote from: masc on July 05, 2019, 06:11:13 PM
@Ilia... would you like to do some tests? This looks good - a pure C++ implementation which works without any library!
https://github.com/rageworx/libsrcnn
It implements this:
http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html

I tried it: it worked out of the box. Yay. It uses OpenMP and needs around one minute for a 1856x1044 picture... not fast, but faster than I thought (for a piece of code that doesn't use the GPU). The configuration was "SRCNNF_Nearest".

Standard export:


Resized (2x) export:
5D3.113 | EOSM.202

Jip-Hop

I also just have the free version of Resolve. I don't actually know if the free version has Super Scale, but if it doesn't it will at least show the results watermarked. I'd love to do a test, but I'm away from the camera and computer this weekend. I will be lurking in the forum though.

theBilalFakhouri

Quote from: Ilia3101 on July 05, 2019, 07:59:59 PM
@Jip-Hop could you do a test frame, and upload both original and super scaled. (I do not have davinci)

Here is your big moiré frame upscaled 4x using DaVinci Super Scale.

theBilalFakhouri

Great result @masc!

Can't wait to see this in MLVApp :D

masc

It was just a test... I now tried 4x scaling with the parameter "SRCNNF_Lanczos3"... it needs 6 min per image (same image as above) on my Core2Duo :P and it doesn't look really sharp.
5D3.113 | EOSM.202

Jip-Hop

Quote from: theBilalFakhouri on July 05, 2019, 08:16:41 PM
Here is your big moiré frame upscaled 4x using DaVinci Super Scale.

From the tests so far I think the results in the OP look best. theBilalFakhouri could you repeat the test of your anamorphic frame with DaVinci Super Scale? Would love to see how it compares to Topaz Gigapixel A.I.

ilia3101

Quote from: theBilalFakhouri on July 05, 2019, 08:16:41 PM
Here is your big moiré frame upscaled 4x using DaVinci Super Scale.

Thanks! Looks like it's taking a softer approach than the IBM one.

@masc4ii imgbb says "bandwidth exceeded", whatever that means (I can't see the upscaled image).

Quote from: Jip-Hop on July 05, 2019, 08:36:14 PM
From the tests so far I think the results in the OP look best. theBilalFakhouri could you repeat the test of your anamorphic frame with DaVinci Super Scale? Would love to see how it compares to Topaz Gigapixel A.I.

This could also be because I used a frame with quite a lot of moiré as an example. We should compare all methods on the same picture.