Algorithm for culling pixels in a graphical data view?
I'm writing a wxPython widget which shows the state of several objects over time (x cycles). Right now I have it working at 1 pixel/cycle and zooming in and back out to 1:1, but I would also like to allow zooming out. I wanted to see if there are any go-to algorithms for throwing away/combining data before I started rolling my own using only my own feeble heuristics. Is there any such algorithm, or should I just start coding my own solution?
Depends a lot on what type of images you're resizing. See The myth of infinite detail: Bilinear vs. Bicubic and Better Image Resizing by our very own Jeff! There you can compare results of naive nearest neighbor, bilinear filtering, bicubic filtering, bicubic sharper and genuine fractals.
Jeff's conclusion:
Reducing images is a completely safe and rational operation. You're simply reducing precision and resolution by discarding information. Make the image as small as you want, and you have complete fidelity-- within the bounds of the number of pixels you've allowed. You'll get good results no matter which algorithm you pick. (Well, unless you pick the naive Pixel Resize or Nearest Neighbor algorithms.)
Enlarging images is risky. Beyond a certain point, enlarging images is a fool's errand; you can't magically synthesize an infinite number of new pixels out of thin air. And interpolated pixels are never as good as real pixels. That's why it's more than a little artificial to upsize the 512x512 Lena image by 500%. It'd be smarter to find a higher resolution scan or picture of whatever you need than it would be to upsize it in software.
But when you can't avoid enlarging an image, that's when it pays to know the tradeoffs between bicubic, bilinear, and more advanced resizing algorithms. At least arm yourself with enough knowledge to pick the best of the bad options you have.
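If you do end up rolling your own culling for the zoomed-out view, a common approach for trace/state data is to bucket the cycles and keep a per-bucket summary rather than resampling like an image. Here's a minimal sketch of min/max decimation, assuming your per-cycle values sit in a plain Python list; the names decimate_min_max, samples and target_width are just for illustration, not from any library:

    def decimate_min_max(samples, target_width):
        """Reduce samples to roughly 2*target_width points by keeping the
        min and max of each bucket, so one-cycle spikes stay visible when
        zoomed out."""
        n = len(samples)
        if n <= target_width:
            return list(samples)           # nothing to cull at this zoom level
        bucket_size = n / float(target_width)
        out = []
        for i in range(target_width):
            lo = int(i * bucket_size)
            hi = int((i + 1) * bucket_size)
            bucket = samples[lo:hi]
            out.append(min(bucket))        # preserve dips
            out.append(max(bucket))        # preserve spikes
        return out

    # usage: one bucket per on-screen pixel column
    # pixels = decimate_min_max(cycle_values, widget_width_px)

Min/max per bucket is the usual compromise for oscilloscope-style views because plain averaging (the moral equivalent of bilinear filtering) smears out short glitches; for purely categorical state data you'd instead keep the most frequent or highest-priority state in each bucket.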