2015-01-16, 07:59 AM
Okay, I have no idea what you're talking about or how your method works.
This is what I would do...
The largest common factor between the two heights (576 and 480) is 96, of course, so I could treat the PAL frame as 96 sections of 720x6 and the NTSC frame as 96 sections of 720x5. In theory that applies across the whole image, so I no longer need to think of my sources as 720x576 and 720x480, but merely as 720x6 and 720x5.

For the upscale I'd double the vertical resolution of one of the sources, probably whichever of the two looks better to begin with, because a straight doubling means I retain more quality from the "better" source - let's say its lines become the even lines (for argument's sake). I'd then upscale the other source to the same size, so either 720x960 or 720x1152. From there I'd use crop and interleave to leave myself with 720x12 or 720x10 sections, split fields down to 720x6 and 720x5, blend the odd fields together with a specially designed mask (weighted towards whichever source provides more detail), and then put it all back together.

Finally I'd run a clean-up filter like QTGMC to remove any artefacts (e.g. aliasing). Without actually writing a script and running it, I'd estimate the increase in vertical resolution at around 20% (depending on the quality of the sources), and mathematically speaking it seems like the optimal way to extract the "extra information" from the secondary SD source.
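To make that concrete, here's a rough AviSynth sketch of the pipeline, assuming the PAL copy is the "better" source and the two clips are already trimmed, frame-aligned and rate-matched. The file names are placeholders, the flat 50/50 Merge is just standing in for the detail mask, and the QTGMC settings aren't tested values:

[code]
# Rough sketch only - assumes both clips already cover the same frames at the
# same frame rate, and that QTGMC (plus its dependencies) is loaded.
pal  = AviSource("pal_source.avi")     # 720x576, assumed to be the "better" copy
ntsc = AviSource("ntsc_source.avi")    # 720x480

# Double the better source vertically and bring the other one up to match.
pal_up  = pal.Spline36Resize(720, 1152)
ntsc_up = ntsc.Spline36Resize(720, 1152)

# Treat the lines as fields: the doubled source keeps its even lines as-is,
# while the odd lines get blended between the two sources.
even_pal = pal_up.AssumeTFF().SeparateFields().SelectEven()
odd_pal  = pal_up.AssumeTFF().SeparateFields().SelectOdd()
odd_ntsc = ntsc_up.AssumeTFF().SeparateFields().SelectOdd()

# Placeholder for the "specially designed mask": a straight 50/50 blend here.
odd_blend = Merge(odd_pal, odd_ntsc, 0.5)

# Put the fields back together and clean up the residual aliasing.
Interleave(even_pal, odd_blend)
Weave()
QTGMC(Preset="Slower", InputType=1)    # InputType=1: treat as progressive, just clean it up
[/code]

The Merge line is where the weighting would actually happen - in practice you'd build a mask from whichever clip resolves more detail (MaskTools2's mt_edge, say) and feed it to Overlay instead of using a flat 0.5.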