Creating blur filter with a shader - access adjacent pixels from fragment shader?
I want to create a blur effect using a fragment shader in OpenGL ES 2.0. The algorithm I am interested in is a simple averaging (box) blur: sum the pixel and its eight neighbours, then divide by 9 to normalize.
However I have 2 issues:
1) Does this require me to first render to a framebuffer and then switch rendering targets? Or is there an easier way?
2) Assume I bind my "source" image to blur as texture 0, and I'm outputting my blurred texture. How do I access the pixels other than the one I'm currently dealing with? The fragment shader is invoked for pixel i, but I need to access the pixels around it. How? And how do I know if I'm an edge case (literally at the edge of the screen)?
3) Is there a more suitable algorithm to get a fuzzy, frosted-glass-looking blur?
Elaborating a bit more on what Matias said:
Yes. You render the image into a texture (best done using FBOs), and in the second (blur) pass you bind this texture and read from it. You cannot perform the render and blur passes in one step, as you cannot access the framebuffer you're currently rendering into. This would introduce data dependencies, as your neighbours need not have their final color yet; worse, their color may depend on yours.
You get the current pixel's coordinates in the special fragment shader variable gl_FragCoord and use these as texture coordinates into the texture containing the previously rendered image, and likewise gl_FragCoord.x +/- 1 and gl_FragCoord.y +/- 1 for the neighbours. But like Matias said, you need to divide these values by the width and height of the image respectively, as texture coordinates lie in [0, 1]. By using GL_CLAMP_TO_EDGE as the wrapping mode for the texture, the edge cases are handled automatically by the texturing hardware. So at an edge you still get 9 values, but only 6 distinct ones (the other 3, the ones actually outside the image, are just duplicates of their inside neighbours).
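Putting that together, a minimal fragment shader sketch for the blur pass could look like this (the uniform names u_image and u_resolution are my own; the shader assumes the first-pass texture is bound with GL_CLAMP_TO_EDGE wrapping):

```glsl
precision mediump float;

uniform sampler2D u_image;    // texture containing the previously rendered image
uniform vec2 u_resolution;    // width and height of that texture in pixels

void main() {
    vec2 texel = 1.0 / u_resolution;
    vec4 sum = vec4(0.0);
    // Sample the current pixel and its eight neighbours (3x3 box).
    // gl_FragCoord.xy is in pixels, so scale by the texel size to get [0,1] coords.
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            vec2 uv = (gl_FragCoord.xy + vec2(float(dx), float(dy))) * texel;
            // With GL_CLAMP_TO_EDGE, coordinates outside [0,1] are clamped,
            // so edge pixels simply re-sample their border neighbours.
            sum += texture2D(u_image, uv);
        }
    }
    gl_FragColor = sum / 9.0;    // average of the 9 samples
}
```

The loop bounds are compile-time constants, which GLSL ES 2.0 requires for fragment shader loops.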
1) Yes, using a FBO is the way to go.
2) With math, if you are at pixel (x, y), then the neighbors are (x+1, y), (x, y+1), (x+1, y+1), (x-1, y), etc. Edge cases are handled with the wrap modes of the texture. Notice that since GL_TEXTURE_2D uses normalized coordinates, the offsets aren't 1, but 1 / width and 1 / height of the texture.
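As a sketch of the same idea using interpolated texture coordinates instead of gl_FragCoord, assuming the CPU uploads the per-texel offset as a uniform (u_texelSize) and the vertex shader passes a varying v_texCoord (both names are hypothetical):

```glsl
precision mediump float;

uniform sampler2D u_image;   // first-pass render target, bound to texture unit 0
uniform vec2 u_texelSize;    // vec2(1.0 / width, 1.0 / height) of the texture
varying vec2 v_texCoord;     // interpolated texture coordinate from the vertex shader

void main() {
    float dx = u_texelSize.x;
    float dy = u_texelSize.y;
    // Unrolled 3x3 box filter; the wrap mode handles out-of-range coordinates.
    vec4 sum = texture2D(u_image, v_texCoord + vec2(-dx, -dy))
             + texture2D(u_image, v_texCoord + vec2(0.0, -dy))
             + texture2D(u_image, v_texCoord + vec2( dx, -dy))
             + texture2D(u_image, v_texCoord + vec2(-dx, 0.0))
             + texture2D(u_image, v_texCoord)
             + texture2D(u_image, v_texCoord + vec2( dx, 0.0))
             + texture2D(u_image, v_texCoord + vec2(-dx,  dy))
             + texture2D(u_image, v_texCoord + vec2(0.0,  dy))
             + texture2D(u_image, v_texCoord + vec2( dx,  dy));
    gl_FragColor = sum / 9.0;
}
```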