GLSL Convolution with Large Kernel in Texture Memory
I'm very new to GLSL, but I'm trying to write a convolution kernel in a fragment shader for image processing. This worked fine while my kernel was small (3x3) and stored as a constant matrix. Now, however, I'd like to use a kernel of size 9x9, or for that matter of arbitrary size. My initial thought was to store the convolution kernel in texture memory. Then, using a sampler2D, I'd read the kernel texture and convolve it with the image texture (also a sampler2D). Is this the right way to go about this?
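For reference, here is a minimal sketch of the texture-lookup approach. All names (`uImage`, `uKernel`, `uKernelSize`, `uTexelSize`, `vTexCoord`) are illustrative assumptions, and note that on GLSL ES 1.00 the loop bounds would have to be compile-time constants rather than a uniform:

```glsl
// Fragment shader sketch: convolve uImage with coefficients stored in uKernel.
// Assumes uKernel is a uKernelSize x uKernelSize single-channel texture and
// uTexelSize is 1.0 / the image resolution.
uniform sampler2D uImage;      // source image
uniform sampler2D uKernel;     // convolution coefficients
uniform int uKernelSize;       // e.g. 9 for a 9x9 kernel
uniform vec2 uTexelSize;       // 1.0 / vec2(imageWidth, imageHeight)
varying vec2 vTexCoord;

void main() {
    vec4 sum = vec4(0.0);
    int halfK = uKernelSize / 2;
    for (int i = 0; i < uKernelSize; ++i) {
        for (int j = 0; j < uKernelSize; ++j) {
            // Sample the coefficient at the center of its texel.
            vec2 kCoord = (vec2(i, j) + 0.5) / float(uKernelSize);
            float coeff = texture2D(uKernel, kCoord).r;
            vec2 offset = vec2(i - halfK, j - halfK) * uTexelSize;
            sum += coeff * texture2D(uImage, vTexCoord + offset);
        }
    }
    gl_FragColor = vec4(sum.rgb, 1.0);
}
```

One caveat with this approach: if the kernel texture uses a normalized 8-bit format, the coefficients are clamped to [0, 1] and quantized, so a float texture (or manual scaling/biasing) may be needed for signed or high-precision kernels.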
I suppose you could also make an array of arbitrary size that contains coefficients. This might work for 81 coefficients, but what happens if you want something larger? Like say a 20x20?
In general, if you need to access multiple large objects in GLSL, what's the proper strategy? Thanks,
D
Sequential access:
- Vertex Attributes
Random access:
- Texture Buffers / Uniform blocks if the source is a buffer
- Uniforms if the source is small
- Textures otherwise
Yes: since uniform and constant space is limited, using a texture as a replacement is a good strategy.
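To illustrate the limit: for a fixed, moderate kernel size you could instead pass the coefficients as a uniform array, but uniform storage is bounded (queryable via `GL_MAX_FRAGMENT_UNIFORM_COMPONENTS`), so a 20x20 kernel's 400 floats may not fit on all hardware. A hedged sketch, with `uImage`, `uKernel`, `uTexelSize`, and `vTexCoord` as illustrative names:

```glsl
// Uniform-array alternative for a fixed 9x9 kernel (81 floats).
// Uniform space is limited, so this does not scale to large kernels;
// a texture lookup avoids that ceiling.
#define KERNEL_SIZE 9
uniform sampler2D uImage;
uniform float uKernel[KERNEL_SIZE * KERNEL_SIZE];
uniform vec2 uTexelSize;       // 1.0 / image resolution
varying vec2 vTexCoord;

void main() {
    vec4 sum = vec4(0.0);
    for (int i = 0; i < KERNEL_SIZE; ++i) {
        for (int j = 0; j < KERNEL_SIZE; ++j) {
            vec2 offset = vec2(i - KERNEL_SIZE / 2, j - KERNEL_SIZE / 2) * uTexelSize;
            sum += uKernel[i * KERNEL_SIZE + j] * texture2D(uImage, vTexCoord + offset);
        }
    }
    gl_FragColor = vec4(sum.rgb, 1.0);
}
```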