by mrsebastian on 12/12/12, 6:40 PM with 30 comments
by ChuckMcM on 12/12/12, 7:36 PM
Upscaling has a lot of research behind it. Creating a 'vectorized' version of an image (see their paper here: http://eprints.gla.ac.uk/47879/1/ID47879.pdf) certainly helps avoid dumb scaling issues. But when you blow it up 7x (like the example), it looks like a cartoon or a bad paint-by-numbers painting. The issue is that you lose information about the picture when you capture it, and without that information you have to 'make up' what you think would have been there had you been able to capture it. People get a lot of mileage out of the fact that, for the most part, real-world imagery has a lot of linear transitions, but there are limits to that as well.
I think the work is interesting but as they say in their paper: "What we have shown already is that there is a viable continuous image format and that it can be used for some conventional operations which are not handled easily in sampled formats." which I agree with, but the pixel is not yet in danger of extinction :-)
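To make the "dumb scaling" point concrete, here's a minimal sketch in Python (assuming Pillow is installed; the filename and the filter choices are mine, not from the comment). Neither filter invents detail; they only interpolate the captured samples differently:

    from PIL import Image

    src = Image.open("photo.png")    # hypothetical input image
    w, h = src.size

    # "Dumb" scaling: each source pixel becomes a 7x7 square.
    blocky = src.resize((7 * w, 7 * h), resample=Image.NEAREST)

    # Windowed-sinc filtering: smoother edges, but it still can't
    # recover detail that was never captured.
    smooth = src.resize((7 * w, 7 * h), resample=Image.LANCZOS)

    blocky.save("blocky_7x.png")
    smooth.save("smooth_7x.png")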
by crazygringo on 12/12/12, 9:04 PM
If JPG and video codecs already store information in terms of wavelets, frequencies, or what have you --
Are there any programs that will render JPG or videos at a higher-than-native resolution?
I know you're not gaining any extra signal. But if you have a 320x240 video and you want to play it full-screen at 1280x960 -- instead of rendering at 320x240 and then naively upscaling 4x, would some kind of direct codec rendering to 1280x960 possibly produce more accurate results?
If the codec/format is doing a good job at extracting "meaning" from the original image for its compression, it seems like it ought to.
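For the DCT part of JPEG, this idea works in principle: zero-pad a block's coefficients and run the inverse transform at a larger size, which evaluates the same stored basis functions at twice the sample density instead of upscaling decoded pixels. A hedged sketch (NumPy/SciPy; the random block is a stand-in for a real decoded 8x8 luma block):

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    block = rng.uniform(0.0, 255.0, (8, 8))   # stand-in for a decoded 8x8 block

    coeffs = dctn(block, norm="ortho")        # forward 2-D DCT, as in JPEG

    # Zero-pad the spectrum to 16x16; the factor 2 (sqrt(2) per axis)
    # preserves amplitude under the orthonormal scaling.
    padded = np.zeros((16, 16))
    padded[:8, :8] = coeffs * 2.0

    big = idctn(padded, norm="ortho")         # 16x16 output, no pixel upscale

This still works per 8x8 block, so it inherits JPEG's block boundaries, and it adds no signal -- it just renders the stored frequencies on a finer grid.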
by corysama on 12/12/12, 9:45 PM
It's a Sample. There are many ways to filter a collection of samples to reconstruct an image. Drawing bigger squares with nearest-neighbor sampling is only one of them. Depending on the context of your application, you might even go as far as the linked article: http://www.extremetech.com/gaming/84040-depixelizing-pixel-a...
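A tiny 1-D sketch of that point (the signal and grid here are illustrative, not from the comment): the same five samples reconstruct three different ways depending on the filter kernel.

    import numpy as np

    samples = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # five samples of a signal
    xs = np.linspace(0.0, 4.0, 81)                   # dense evaluation grid

    # Nearest neighbor: the "bigger squares" reconstruction.
    nearest = samples[np.clip(np.round(xs).astype(int), 0, 4)]

    # Linear interpolation: connect the samples with straight lines.
    linear = np.interp(xs, np.arange(5), samples)

    # Ideal band-limited reconstruction: a sum of shifted sinc kernels.
    sinc = sum(s * np.sinc(xs - n) for n, s in enumerate(samples))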
by kemiller on 12/12/12, 7:53 PM
by randallu on 12/12/12, 8:14 PM
Ultimately, however, vectors are far more expensive to rasterize than bitmaps, especially if you want antialiasing along edges. There was a bunch of noise a while back about NVIDIA's path rendering extension -- it does antialiasing via 16x multisampling (!!!), which is NOT a mobile-friendly approach.
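Roughly what 16x multisampling costs: every output pixel runs an inside/outside test at 16 sub-pixel positions. A sketch (Python/NumPy; the circle and grid size are illustrative):

    import numpy as np

    def inside(x, y):
        return x * x + y * y <= 50.0 ** 2    # hypothetical filled circle

    H = W = 128
    ys, xs = np.mgrid[0:H, 0:W]

    # 4x4 sub-sample offsets within each pixel: 16 coverage tests per pixel.
    offs = (np.arange(4) + 0.5) / 4.0

    coverage = np.zeros((H, W))
    for dy in offs:
        for dx in offs:
            coverage += inside(xs + dx, ys + dy)
    coverage /= 16.0    # fraction of sub-samples inside = antialiased edge

That's 16 shape evaluations per pixel where a bitmap blit does one texture fetch, which is the mobile-unfriendliness in a nutshell.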
by bryanlarsen on 12/12/12, 7:55 PM
From the article, the source data is in pixel format. No codec format can be better than the source data, and the output is displayed on a screen made of pixels. Pixels in, pixels out: the intermediate format can only make things worse, not better. It's not surprising that they're able to do better than current formats; it sounds like they're using a huge amount of CPU for encoding, and you could get better compression by creating a new MPEG-4 profile that required more CPU, too.
The intermediate format may be tweaked to provide better results while scaling, but you could probably add scaling hints to more conventional codecs, too. I'm not sure what the point would be either, unless the source data had a lot more pixels than the output format...
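A tiny illustration of "pixels in, pixels out" (NumPy; the test signal is made up): once detail has been lost from the sampled source, no intermediate representation can bring it back.

    import numpy as np

    x = np.linspace(0.0, 1.0, 256)
    fine = np.sin(2.0 * np.pi * 100.0 * x)    # detail near the sampling limit

    coarse = fine[::4]                        # the "source data", 1/4 resolution
    rebuilt = np.interp(x, x[::4], coarse)    # every upscaler starts from coarse

    rms = np.sqrt(np.mean((fine - rebuilt) ** 2))
    print(rms)    # large: the detail was never in the source to begin with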
by jgeralnik on 12/12/12, 7:28 PM
Ultimately, though, I think it will take a lot more than a new codec to kill the pixel.
by jws on 12/12/12, 7:27 PM
PDF of authors' previous work on still images: http://eprints.gla.ac.uk/47879/1/ID47879.pdf
Still too long? "Mona Lisa in 50 Polygons"[1] Grows Up.
[1] http://rogeralsing.com/2008/12/07/genetic-programming-evolut...
by arocks on 12/12/12, 7:27 PM
Similarly, vector icons appear blurrier than pixel-drawn ones, especially at lower resolutions. Extremely scaled-up images are unimpressive due to loss of detail.
Vector art often works well in a certain range of resolutions. The "infinite scaling" promise should be taken with a grain of salt.
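One way to see why unhinted vector art goes blurry at icon sizes (a sketch; the coverage helper and the spans are illustrative): a one-pixel-wide stroke whose edges land mid-pixel rasterizes to two half-gray columns instead of one crisp one.

    def coverage(px_left, px_right, x0, x1):
        """Fraction of pixel [px_left, px_right) covered by span [x0, x1)."""
        return max(0.0, min(px_right, x1) - max(px_left, x0))

    # A 1px stroke aligned to the pixel grid vs. the same stroke shifted 0.5px.
    aligned = [coverage(i, i + 1, 2.0, 3.0) for i in range(6)]
    shifted = [coverage(i, i + 1, 2.5, 3.5) for i in range(6)]

    print(aligned)   # [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]  -> one crisp column
    print(shifted)   # [0.0, 0.0, 0.5, 0.5, 0.0, 0.0]  -> two gray columns

Hinting in pixel fonts and icons is essentially hand-fixing this alignment, which is exactly what pure scaling of vectors doesn't do.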
by czr80 on 12/12/12, 7:54 PM
by nnnnni on 12/13/12, 1:19 AM