
This makes sense and is done all the time, as there is always something or someone to crop out of a scene.

But what is discussed on this and other forums makes absolutely no sense to me: you're taking content that was already edited, graded, denoised, cropped and mastered, converted into an end-user friendly deliverable format, often re-encoded yet again by end users so they can share it via P2P or a streaming site, and then you try to make a silk purse out of a sow's ear by trying to get back to the original master. And somehow this is seen as a worthwhile endeavor.

As for the original question: you can't enhance what's not there, you can only enhance what is there. I can't find the page, but Blackmagic has a page where they talk about using their URSA 4.6K G2, shooting in 4.6K, cropping to 3.7K during editing, and then slightly upscaling to 4K for the deliverable. The only time upscaling makes sense is when you are working with the original lossless or near-lossless footage and, after processing, you need to upscale the master in order to have it in a standard distributable resolution. More importantly, and I'm sure that many people will disagree with me, I would think the best results come from upscaling first and then doing all the restoration.

To the question "what would you think about upscaling with Topaz Video Enhance AI after all of workflow 1? That would enhance some details but would not make everything look plastic, probably?": personally I am not a big fan of upscaling under most circumstances, and you can't make a valid Topaz vs nnedi3_rpow2 comparison under these conditions.
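The crop-then-slight-upscale deliverable workflow described above can be sketched with ffmpeg. This is only an illustration: the filenames, the exact crop dimensions (3680x2070 standing in for "3.7K"), and the CRF value are assumptions, not anything stated in the thread.

```shell
# Hypothetical filenames and dimensions. Crop the 4.6K master to the
# reframed area, then do the small (~1.04x) upscale to UHD as the very
# last step before encoding the deliverable.
ffmpeg -i master_46k.mov \
  -vf "crop=3680:2070,scale=3840:2160:flags=lanczos" \
  -c:v libx264 -crf 14 deliverable_uhd.mp4
```

With no position arguments, `crop` takes the centered region; Lanczos is chosen here because the scale factor is small and detail preservation matters more than smoothing.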

This is not a valid comparison: workflow 1 and workflow 2 do not follow the same order. In W1 you upscale after you change the color matrix but before sharpening, yet in W2 the upscale is the last thing you do.

Attached the results: Ufo_sII2a_spot_amtv_2_cut_v19_upscale_fullhd_reduced-muxed.mp4

The starting video capture is not that good, and the bitrate for workflow 1 is lower, but you can compare for yourself. For the attached sample I used the Gaia High Quality model.

Notes:
- A DAR parameter (4:3) has been added to the video input of Topaz with ffmpeg, otherwise Topaz will not respect the width proportion.
- The h264 compression done inside Topaz is out of your control; you can only choose a "Compression Factor" (bitrate allocation, I used the middle setting).
- It appears to me that some (additional) denoising is happening in Topaz.
- The sharpening in the AviSynth step of workflow 2 should probably be slightly reduced to compensate for the sharpening occurring in Topaz.
- I only used progressive videos. I don't know Topaz's deinterlacing procedures, but I doubt they can be any better than QTGMC in AviSynth.
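The DAR fix mentioned in the notes can be done without re-encoding by tagging the stream with ffmpeg's `-aspect` option while stream-copying. The filenames here are placeholders; only the 4:3 value comes from the post.

```shell
# Tag the file with a 4:3 display aspect ratio using stream copy (-c copy),
# so Topaz and players honour the intended width proportion. No re-encode,
# so no quality loss. "capture.mp4" is a placeholder filename.
ffmpeg -i capture.mp4 -c copy -aspect 4:3 capture_dar43.mp4
```

Because `-c copy` only rewrites container/stream metadata, this step is effectively instant even on long captures.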

The processing steps, in order:
1. Levels and colors correction before filtering
2. Restoration with AviSynth (as per workflow 1)
3. The AviSynth output of the previous steps is the input for Topaz
4. Compression with h264 (just to compare and upload the sample)
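The final comparison encode can be sketched as a plain x264 pass over the Topaz output. The filenames, CRF value, and preset are assumptions for illustration; the post only says h264 was used to produce an uploadable sample.

```shell
# Hypothetical filenames/settings. Encode the Topaz output with x264 at a
# fixed CRF so both workflows are compared at a comparable quality target
# rather than at whatever bitrate Topaz's internal encoder picked.
ffmpeg -i topaz_output.mov -c:v libx264 -crf 18 -preset slow sample_for_upload.mp4
```

Using CRF rather than a fixed bitrate sidesteps the bitrate mismatch between the two workflows that the poster flags above.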
