
How to blur a Bitmap (Android)?

I am struggling to get Bitmaps blurred using Android.

I have seen a lot of information about using a simple kernel like

0    0    0    5    0    0    0
0    5   18   32   18    5    0
0   18   64  100   64   18    0
5   32  100  100  100   32    5
0   18   64  100   64   18    0
0    5   18   32   18    5    0
0    0    0    5    0    0    0

My problem is that I am really not sure how to multiply this with my Bitmap in an efficient way.

Should I go through every pixel, call

image.getPixel(x, y)

and store those values in a new array (so I don't have to fetch them over and over again), and then go through that array and, for each value, add up the surrounding values multiplied by the corresponding kernel entries and divide by 1068 (the sum of all entries in the kernel above)?

Is there any better way to do this? Is there a simple solution for the borders?

Or is there even something available in the Android SDK I missed?


What you are doing is basically 2D convolution between the original image I and the kernel K (the kernel is actually a PSF - point spread function). If your image I is of size m x n and the kernel is of size r x s, you need r x s multiplications for each point of the blurred image J, which gives m x n x r x s multiplications in total for the whole image.
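For reference, here is a minimal sketch of that direct approach in Java (the class and method names are placeholders, not an existing API). It reads the whole bitmap once with getPixels() instead of calling getPixel() per pixel, and clamps coordinates at the borders:

import android.graphics.Bitmap;
import android.graphics.Color;

// Minimal sketch of direct 2D convolution on an ARGB_8888 bitmap.
public final class ConvolutionBlur {

    public static Bitmap blur(Bitmap src, int[][] kernel, int kernelSum) {
        int w = src.getWidth(), h = src.getHeight();
        int[] in = new int[w * h];
        int[] out = new int[w * h];
        // Fetch all pixels once; much faster than getPixel() per sample.
        src.getPixels(in, 0, w, 0, 0, w, h);

        int kh = kernel.length, kw = kernel[0].length;
        int cy = kh / 2, cx = kw / 2;

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int a = 0, r = 0, g = 0, b = 0;
                for (int ky = 0; ky < kh; ky++) {
                    for (int kx = 0; kx < kw; kx++) {
                        // Clamp to the nearest edge pixel at the borders.
                        int sy = Math.min(h - 1, Math.max(0, y + ky - cy));
                        int sx = Math.min(w - 1, Math.max(0, x + kx - cx));
                        int p = in[sy * w + sx];
                        int k = kernel[ky][kx];
                        a += Color.alpha(p) * k;
                        r += Color.red(p) * k;
                        g += Color.green(p) * k;
                        b += Color.blue(p) * k;
                    }
                }
                // Normalize by the sum of the kernel entries (1068 for the kernel above).
                out[y * w + x] = Color.argb(a / kernelSum, r / kernelSum,
                        g / kernelSum, b / kernelSum);
            }
        }

        Bitmap result = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        result.setPixels(out, 0, w, 0, 0, w, h);
        return result;
    }
}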

A computationally more efficient approach is to use the DFT (Discrete Fourier Transform). Transform both the image and the kernel, multiply them in the transform domain, and then go back via the inverse DFT. In short:

J = IDFT(DFT(I)*DFT(K))

Fast algorithms (FFT - Fast Fourier Transform) exist for computing the DFT, and C implementations are easy to find on the Internet. To use C source on Android, you call it through JNI (Java Native Interface), which the platform supports.
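The multiplication in the transform domain is just an element-wise complex product. A minimal sketch in Java, assuming both transforms come back (e.g. from an FFT routine called through JNI) as interleaved complex arrays {re, im, re, im, ...} of the same, zero-padded size:

public final class SpectrumMultiply {

    // Element-wise complex product of two interleaved complex arrays.
    public static double[] multiply(double[] dftImage, double[] dftKernel) {
        double[] product = new double[dftImage.length];
        for (int i = 0; i < dftImage.length; i += 2) {
            double re1 = dftImage[i],  im1 = dftImage[i + 1];
            double re2 = dftKernel[i], im2 = dftKernel[i + 1];
            // (re1 + i*im1) * (re2 + i*im2)
            product[i]     = re1 * re2 - im1 * im2;
            product[i + 1] = re1 * im2 + im1 * re2;
        }
        return product;
    }
}

Feeding the product through the inverse FFT then gives the blurred image J.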

Regarding borders, with the DFT you have no issues, since blurring at the border is done circularly (e.g. left-border values are calculated using some right-border values as well).

If you are working with separable kernels (a 2D kernel that can be written as the outer product of two 1-D kernels), things become simpler: the 2D convolution can be performed as 1-D convolutions over the rows and then over the columns (or vice versa). The same is true for blurring using the DFT.
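A minimal sketch of the separable case on a single channel (row-major float array, normalized 1-D kernel, clamped borders); per pixel the cost drops from r x s to r + s multiplications:

public final class SeparableBlur {

    public static float[] blur(float[] channel, int w, int h, float[] kernel1d) {
        float[] tmp = new float[w * h];
        float[] out = new float[w * h];
        int c = kernel1d.length / 2;

        // Horizontal pass over the rows, clamping at the left/right borders.
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0f;
                for (int k = 0; k < kernel1d.length; k++) {
                    int sx = Math.min(w - 1, Math.max(0, x + k - c));
                    sum += channel[y * w + sx] * kernel1d[k];
                }
                tmp[y * w + x] = sum;
            }
        }

        // Vertical pass over the columns of the intermediate result.
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0f;
                for (int k = 0; k < kernel1d.length; k++) {
                    int sy = Math.min(h - 1, Math.max(0, y + k - c));
                    sum += tmp[sy * w + x] * kernel1d[k];
                }
                out[y * w + x] = sum;
            }
        }
        return out;
    }
}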


Try to make use of BlurMaskFilter.

Example of usage:

http://www.anddev.org/decorated_and_animated_seekbar_tutorial-t10937.html
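A minimal sketch of how that could look for a bitmap (assuming an ARGB_8888 source; the class name is just a placeholder). Keep in mind that a mask filter blurs the alpha mask of whatever is drawn, so it is mostly useful for soft edges, glows and shadows, as in the linked tutorial, rather than a full-content Gaussian blur:

import android.graphics.Bitmap;
import android.graphics.BlurMaskFilter;
import android.graphics.Canvas;
import android.graphics.Paint;

public final class MaskFilterBlur {

    // Redraws the source bitmap through a Paint carrying a BlurMaskFilter.
    public static Bitmap blur(Bitmap src, float radius) {
        Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
                Bitmap.Config.ARGB_8888);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setMaskFilter(new BlurMaskFilter(radius, BlurMaskFilter.Blur.NORMAL));
        new Canvas(out).drawBitmap(src, 0f, 0f, paint);
        return out;
    }
}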

