Large image, low fps vs. Small image, high fps

Started by dfort, June 04, 2015, 10:18:22 PM


dfort

Should some of us consider changing from shooting 1920x1080 24p to 1280x720 60p? (That is of course if you can't shoot anything larger in 60p.)

The other day I saw a presentation by Randy Ubillos. He's the guy who coded the original versions of Adobe Premiere and went on to Apple to write Final Cut Pro and FCP-X. He recently retired from Apple and is giving talks about traveling and shooting movies for fun. He advocates shooting at 60fps.

This got me thinking. I come from a film background, which has been stuck in a 24fps playback world. (Ok, now we have higher frame rate playback, but many viewers hate it because the experience is like watching live television.) In any case, what I'm wondering about is the practicality of shooting 60p and finishing in 24p.

One second of 1920x1080 24p video consists of 49,766,400 pixels while one second of 1280x720 60p video will flash 55,296,000 pixels at the viewer. Ok, I know that technically video runs at 23.98, but the DCPs and theatrical projectors I have worked with were in a studio that set everything up for a 24fps workflow, so I'll stick with that.

I'm using these numbers because these are the limits of my Canon camera running ML but stay with me because this also applies to 4k video at 24p vs. HD video at 60p. And it gets even more interesting with higher frame rates and smaller image sizes like the iPhone 6's 568x320 240fps mode.

So, starting with 1280x720 60p camera-original material: first upscale it to 1920x1080 using a method that interpolates new pixels rather than a simple upscale, then change the frame rate using a form of image stacking to increase the resolution. This should result in video fairly close in quality to material shot at 1920x1080 24p, with the advantage of more control over shutter angle and motion blur, plus the option to conform to slow motion, albeit at a lower resolution. In addition, it should work even better when the camera or subject is moving, as opposed to shooting locked down on a tripod, for reasons better explained in this article:

http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/
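
Here is a rough sketch of that idea in Python -- assuming the 60p frames have already been exported as numbered TIFFs in a "frames" folder, and using OpenCV's bicubic resize plus a plain average for the stack rather than any particular NLE tool:

# Sketch: interpolate 1280x720 frames up to 1920x1080, then average a
# group of consecutive frames into one higher-quality output frame.
import glob
import cv2
import numpy as np

def stack_group(paths, size=(1920, 1080)):
    acc = np.zeros((size[1], size[0], 3), dtype=np.float64)
    for path in paths:
        img = cv2.imread(path).astype(np.float64)
        acc += cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)
    return (acc / len(paths)).astype(np.uint8)

frames = sorted(glob.glob("frames/*.tif"))
cv2.imwrite("stacked_0001.tif", stack_group(frames[0:10]))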

Note that image stacking is nothing new; it has been used in astrophotography to reduce noise and in macro photography to increase depth of field and resolution.

Also note that while manufacturers are pushing for higher resolution, there have been many features shot on standard-resolution video that have stood the test of time, so we shouldn't fear shooting at smaller image sizes. Over 10 years ago I worked on "A Day Without A Mexican", which was shot on Panasonic DVX100 cameras set to 24p (with the so-called advanced pulldown) and blown up to 35mm. We actually had several comments about which scenes were shot on video and which on film--it was all shot on video.

Besides image stacking there are other ways to make a better upscale. Adobe has a fairly new Detail-preserving Upscale effect, and here is a link to a paper on "Super-Resolution from a Single Image" -- thanks to a1ex for the link:

http://www.wisdom.weizmann.ac.il/~vision/single_image_SR/files/single_image_SR.pdf

Disclaimer--I don't have any formal technical training, nor did I absorb all that much from the conversations I had with engineers over my 20+ years working as an assistant editor/editor/DI conformist, etc., so my musings might be completely impractical.

Levas

Reading the article, it looks like you need to combine about 6 to 8 frames for roughly 4 times the pixel count, i.e. double the resolution in each dimension. So 60fps at 1280x720 could give you 2560x1440, but only at 10fps...
So unless you can shoot at 6x24fps (about 150fps), this is probably useless.
I'll stick with full HD at 25fps ;)

dfort

I did some experimenting and it looks like there is something here that might be interesting. Recording at 60fps, each source frame is exposed at up to 1/60th sec, but stacking 10 consecutive frames means the combined frame spans roughly 1/6th of a second of motion; recording at 24fps with a 180 degree shutter, the exposure would be about 1/50th sec. Of course this will give you all sorts of artifacts to deal with, but it just might be an interesting effect. The process would be to stack frames 1 through 10 to make the first 24p frame, then slide the window forward. From there it should be pretty much like the old 3:2 pulldown from the standard-def days: frames 1 through 10 make the first frame, frames 3 through 12 the second, frames 6 through 15 the third, and so on -- since 60/24 = 2.5, the window advances by alternating steps of two and three frames.
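
To make that cadence concrete, here is a tiny sketch -- just index math, no image processing -- of which 60p source frames would feed each 24p output frame:

# Each 24p output frame stacks a 10-frame window of 60p source frames;
# the window start advances by 60/24 = 2.5 frames, i.e. steps of 2, 3, 2, 3...
FPS_IN, FPS_OUT, STACK = 60, 24, 10

def window(out_frame):
    start = (out_frame * FPS_IN) // FPS_OUT        # 0, 2, 5, 7, 10, ...
    return range(start, start + STACK)

for i in range(4):
    print(i, list(window(i)))
# 0: frames 0-9, 1: frames 2-11, 2: frames 5-14, 3: frames 7-16 (zero-based)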

For the first experiment with 1280x720 60p I didn't use Movie crop mode, so I could compare it side by side with 1920x1080 24p, but the aliasing was terrible. However, Movie crop mode looks very promising. [Edit: Doh! Movie crop mode only works with 1920x1080.]

Of course, shooting 1280x720 24p raw looks very good too [Edit: Movie crop mode does work in this case--H.264]. The post processing, while complicated, might still be easier than image stacking a movie file.

I'll be busy with other things for the next couple of weeks, but I'll keep thinking about this.

dfort

Here is a test shot on an EOS-M.



1280 x 720 60P H.264 movie file.
Lots of aliasing, moiré and a general loss of resolution.




Shot on 1280 x 720 60P H.264, exported as TIFFs, 10 frames stacked and resized to 1920x1080 in Photoshop.
Big improvement in image quality but note the motion blur in the flag.




Shot on 1920 x 1080 24P H.264 for comparison -- still the best quality.

The point isn't to shoot smaller images at higher frame rates to get better images, but to gain flexibility in post production without losing too much quality. Think of this as Dual ISO, but for speed ramping and shutter angle control in post. Well, sort of.


dfort

Ok, one more post before I go AFK for a while. (Hopefully someone is reading these posts and doesn't completely write off this idea.)

If the improvement in quality using image stacking is so good with 1280x720 60P video, how about 1920x1080 60P to 4K? I don't know of any Canon camera that can shoot 1920x1080 60P, but that might be possible with FPS override on some cameras? My original idea was to shoot 60P, but maybe it can also be effective with 30P and 24P video? One issue might be motion artifacts due to the low effective shutter speed, but it might end up being a cool effect. Sort of the opposite of the high shutter speed staccato "Saving Private Ryan" effect.

Using the 1920x1080 24P H.264 video from my lowly EOS-M, here's what's possible.


Upscaled to 3840x2160 using image stacking.
Once again the most obvious artifact is the motion in the flag.



And here are details with and without image stacking.

Note that when stacking these images, the base layer is at 100% opacity, the next layer at 50%, the third at 33%, and so on. There are other methods of image stacking, but this one seems to work quite well, and it should be possible to write a script to automate the process.
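
For what it's worth, that opacity cascade (100%, 50%, 33%, 25%, ... 1/n) works out to an equal-weight average of all the layers; a small numpy sketch of the math (not the Photoshop internals) shows the equivalence:

# Compositing layer n over the stack at opacity 1/n reduces to a plain
# average of all the frames.
import numpy as np

frames = [np.random.rand(4, 4) for _ in range(10)]   # stand-ins for TIFF frames

composite = frames[0]                                 # base layer at 100%
for n, layer in enumerate(frames[1:], start=2):
    alpha = 1.0 / n                                   # 50%, 33%, 25%, ...
    composite = alpha * layer + (1 - alpha) * composite

print(np.allclose(composite, np.mean(frames, axis=0)))   # prints True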

Note the loss of image at the edges due to the layer alignment process in Photoshop. It would be better if the base layer were anchored and all the other layers aligned to it. Sure, you would lose some detail at the very edges, but it should be minimal. There will also be a limit to the number of images that can be stacked when the camera is moving, but there is a general loss of resolution due to the movement anyway.
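
Something along these lines would keep the base frame anchored and warp every other frame onto it -- a sketch using OpenCV's ECC alignment rather than Photoshop's Auto-Align, assuming the frames are already loaded as images:

# Align a frame to the (anchored) base frame with a translation-only ECC fit,
# so the base layer itself is never resampled.
import cv2
import numpy as np

def align_to_base(base, frame):
    base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)              # start from identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(base_gray, frame_gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria)
    h, w = base.shape[:2]
    return cv2.warpAffine(frame, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

The aligned frames could then go through the same opacity/average stack as above, and any frame that refuses to converge could simply be left out of the stack.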

Any comments? Has anyone tried this? (I usually find out that my "original" ideas aren't mine or original.)

mothaibaphoto

I use median stacking with timelapses when there is not enough motion, but I had never upscaled the images. You gave me the idea of how to get a UHD timelapse, and not only with FRSP. My 5D mkIII can shoot 3520x1320 at 12 FPS continuously. I upscaled such a shot with "Detail-preserving Upscale" in AE to 2160 height and exported it as a TIFF sequence (this is the step I never did before). Then I split the sequence into subfolders according to the desired number of images to stack (Python script) and stacked each subfolder with a median (Photoshop javascript). The resulting sequence looks, to my taste, like it was shot in UHD from the start.

Python script:

# for Statics_Folder.jsx
# Splits a folder of TIFFs into numbered subfolders of roughly
# ImagesToStack consecutive frames each, ready for median stacking.
import glob, os, shutil

infolder = "D:/temp1/"

def split_list(a_list, folders):
    # split a_list into `folders` consecutive chunks of (nearly) equal size
    step = len(a_list) // folders + (1 if len(a_list) % folders else 0)
    return [a_list[i * step:(i + 1) * step] for i in range(folders)]

ImagesToStack = int(input("Number of images to stack: "))
ff = sorted(glob.glob(infolder + "*.tif"))
total = len(ff)
folders = total // ImagesToStack + (1 if total % ImagesToStack else 0)
newlist = split_list(ff, folders)
for folder in range(folders):
    # create a subfolder named 0, 1, 2, ... and move its frames into it
    os.makedirs(infolder + str(folder))
    for EveryFile in newlist[folder]:
        fileName = infolder + str(folder) + "/" + os.path.basename(EveryFile)
        shutil.move(EveryFile, fileName)


Photoshop javascript:

    // let statistics we are running it from another script
    var runStatisticsFromScript = true;
    var ScriptFolderPath = app.path + "/"+ localize("$$$/ScriptingSupport/InstalledScripts=Presets/Scripts");
    // load statistics into memory
    $.evalFile(ScriptFolderPath+ '/' + "Statistics.jsx");
    // set the stack mode
    imageStats.selectedOperationStr = localize("$$$/AdobePlugin/statistics/Median=Median");
    imageStats.selectedOperation = 'medn';
    // prompt for top folder
    var topFolder = Folder.selectDialog ('prompt');
    // make sure user didn't cancel
    if(topFolder!=null){
        // create saveFolder
        var saveFolder = new Folder(topFolder+'/Stacked_Images');
        if(!saveFolder.exists) saveFolder.create();
        // put the top folder in an array
        var folderArray = Array(topFolder);
        // add the subFolders to the array
        getSubFolders( topFolder );
        // loop the folders
        for(var folderIndex=0;folderIndex<folderArray.length;folderIndex++){
            var fileList = folderArray[folderIndex].getFiles(/\.tif/);
            // make sure there is at least two files to stack
            if(fileList.length > 1){
                imageStats.computeStatistics(fileList, false);
                var saveName = folderArray[folderIndex].name+'.psd';
                // should check for unique name - this will overwrite if subfolders have the same names
                var saveFile = new File(saveFolder+'/'+saveName);
                SaveAsPSD( saveFile, true, true);
                app.activeDocument.close(SaveOptions.DONOTSAVECHANGES);
            }
        }
    }
    function getSubFolders( folder ){
        var files = folder.getFiles();
        for(var f=0;f<files.length;f++){
            if(files[f] instanceof Folder){
                folderArray.push(files[f]);
                getSubFolders( files[f] );
            }
        }
    };
    function SaveAsPSD( inFileName, inMaximizeCompatibility, inEmbedICC ) {
        var psdSaveOptions = new PhotoshopSaveOptions();
        psdSaveOptions.embedColorProfile = inEmbedICC;
        psdSaveOptions.maximizeCompatibility = inMaximizeCompatibility;
        // how to flatten?
        app.activeDocument.flatten()
        app.activeDocument.saveAs( File( inFileName ), psdSaveOptions );
    };

dfort

Wow, you're actually doing this. I aligned the layers in Photoshop when I did my examples, but if you're shooting a timelapse on a tripod that shouldn't be necessary. It would be great if you could post some examples.

mothaibaphoto

I finally got it online:
http://www.shutterstock.com/video/clip-12543578-stock-footage-hilltop-view-on-the-mist-rolling-over-rainforest-at-sunrise-timelapse-panorama.html
The preview looks awful :( I think the transcoder simply can't handle the smooth transitions of such a detailed image. This shot could be used to test video hardware - just like "Ducks.Take.Off".
At least this trick works...

dfort

Very nice. Did you do that pan in post? Just wondering, because the image stacking process on a panning shot would limit the number of images that can be aligned.

Quote from: mothaibaphoto on November 10, 2015, 06:30:44 AM
This shot can be used to test video hardware - just like "Ducks.Take.Off".

Ducks.Take.Off ??

In any case, yes it does work very well indeed.


dfort

I see--I think--what you mean about the video transcoding issues.

I also suspected that you made the pan in post, though it works great on that shot because there is no parallax to give it away.

For moving or handheld shots I was thinking of a "smart" image alignment process that reduces the number of images in the stack if the shift between frames becomes too severe--as would happen in a panning shot (there's a rough sketch of the idea after the quote below). Sure, the resolution would drop during camera moves, depending on how quickly the camera moves, but that happens naturally anyway. By the way, some slight camera movement, especially sub-pixel movement, actually increases the resolution, according to the article I referenced in my original post:

Quote: The superresolution method here relies on statistics. We'll gather a high quality dataset by shooting a collection of about 20 consecutive sharp images. The real trick is that we'll shoot this set of exposures completely hand held. The subtle motion of our hand will actually act just like a sensor shift mechanism and allow different pixels to capture different parts of the scenes. It sounds simple but it actually works.
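
A very rough sketch of that "smart" stack: measure each candidate frame's shift against the base frame and stop growing the stack once the shift gets too large. This one uses OpenCV's phase correlation and an arbitrary pixel threshold -- an assumption for illustration, not a tested workflow:

# Grow the stack frame by frame, but stop once the measured shift from the
# base frame exceeds a threshold; fast pans then fall back to small stacks
# (or a single frame) automatically.
import cv2
import numpy as np

MAX_SHIFT = 8.0     # pixels -- arbitrary threshold for this sketch
MAX_STACK = 10

def adaptive_stack(frames):
    base = np.float32(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))
    stack = [frames[0]]
    for frame in frames[1:MAX_STACK]:
        gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        (dx, dy), _ = cv2.phaseCorrelate(base, gray)
        if (dx * dx + dy * dy) ** 0.5 > MAX_SHIFT:
            break               # the camera moved too far -- stop stacking
        stack.append(frame)     # (frames would still be aligned before averaging)
    return np.mean([f.astype(np.float64) for f in stack], axis=0).astype(np.uint8)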

dfort

This article about a Google paper on superresolution came up on my news feed this morning, so I thought it was worth necroposting. It will be presented at SIGGRAPH 2019.

https://sites.google.com/view/handheld-super-res



Although this paper is centered on still photography, it can also apply to video. (See my previous post.)

a1ex

The above was done by people who know what they are doing.

This was done by people who don't :P

Definitely worth reading.

Quote: Specifically, the algorithm is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode (whether zooming or not) on Google's flagship phone.