Precision loss with unprojecting and reprojecting
I am implementing a variant of the shadow mapping technique but I (think) am suffering from precision loss.
Here is what I do:
- I draw my scene from eyeposition to fill the depth buffer
- I unproject these points using gluUnproject
- I reproject these points from my lightsource as eyepoint using gluProject
- I then loop over all my triangles and project them from my light source as eyepoint
-> For points (from the first step) that fall inside a triangle I compare depths: the depth interpolated at that pixel from the triangle's vertices against the depth I reprojected earlier. If the triangle is closer, the point is marked as in shadow.
I use barycentric coordinates to interpolate depth at an irregular location. This means comparing three float values against zero, comparing two floats to see which one is smaller, etc. I used a bias (eps = 0.00001) on all the comparisons without any big effect.
The algorithm is working nicely, but I still have some artifacts, and I think these can be attributed to the un- and reprojecting. Could this be the cause?
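For reference, the barycentric test described above could be sketched like this (a minimal, hypothetical version; `interpolateDepth`, the `Vec2` type, and the weight layout are my assumptions, not the questioner's actual code):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of epsilon-guarded barycentric depth interpolation.
struct Vec2 { double x, y; };

// Twice the signed area of triangle (p, q, r); positive for CCW winding.
static double signedArea(const Vec2 &p, const Vec2 &q, const Vec2 &r) {
    return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

// Interpolate depth at s inside triangle ABC with vertex depths zA/zB/zC.
// Returns false if the triangle is degenerate or s lies outside it
// (any barycentric weight below -eps, matching the biased comparisons
// mentioned above).
bool interpolateDepth(const Vec2 &A, const Vec2 &B, const Vec2 &C,
                      double zA, double zB, double zC,
                      const Vec2 &s, double eps, double &zOut) {
    double area = signedArea(A, B, C);
    if (std::fabs(area) < eps) return false;               // degenerate
    double wA = signedArea(s, B, C) / area;
    double wB = signedArea(A, s, C) / area;
    double wC = 1.0 - wA - wB;
    if (wA < -eps || wB < -eps || wC < -eps) return false; // outside
    zOut = wA * zA + wB * zB + wC * zC;
    return true;
}
```

Note that window-space (post-projection) depth is an affine function over the screen-space triangle, so interpolating it linearly like this is valid; it is eye-space depth that cannot be linearly interpolated in screen space.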
I am using a perspective projection, my near = 1.0 and my far = 20.0. What can I do to improve this?
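With those plane values, the nonlinearity of the depth buffer can be made concrete. The sketch below assumes the standard perspective depth mapping (as produced by gluPerspective with the default [0, 1] depth range); the helper names `depthFromZ`/`zFromDepth` are mine:

```cpp
#include <cassert>
#include <cmath>

// Map a distance Z in front of the camera to [0,1] window depth, for a
// perspective projection with near plane n and far plane f, and back.
// Precision is concentrated near the near plane, which is why pushing
// the near plane out usually helps much more than pulling the far plane in.
double depthFromZ(double Z, double n, double f) {
    // d = 0 at Z = n, d = 1 at Z = f, hyperbolic in between
    return f * (Z - n) / (Z * (f - n));
}

double zFromDepth(double d, double n, double f) {
    // Inverse of the mapping above
    return f * n / (f - d * (f - n));
}
```

For example, with n = 1 and f = 20, half of the [0, 1] depth range is already used up well before the midpoint of the view volume, so comparisons far from the camera operate on much coarser effective precision than comparisons close to it.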
I'd be happy to show some code but it's quite a lot. So let's see what suggestions come out first.
Artifact http://img849.imageshack.us/img849/4420/artifactk.png
I read my pixels and unproject in this way:
//Get the original pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
glReadPixels(0, 0, 800, 300, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
glReadPixels(0, 300, 800, 300, GL_DEPTH_COMPONENT, GL_FLOAT, BUFFER_OFFSET(0));
//Process the first batch of pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
GLfloat *pixels1 = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
processPixels(pixels1, lightPoints, modelview, projection, viewport, 0);
//Process the second batch of pixels
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
GLfloat *pixels2 = (GLfloat*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY);
processPixels(pixels2, lightPoints, modelview, projection, viewport, 1);
//Unmap buffers and restore the default buffer
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[0]);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboIds[1]);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
//Projecting the original points to lightspace
glLoadIdentity();
gluLookAt( light_position[0], light_position[1], light_position[2], 0.0,0.0,0.0,0.0,1.0,0.0);
//We get the new modelview matrix - Lightspace
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
//std::cout <<"Reprojecting" << std::endl;
GLdouble winX, winY, winZ;
Vertex temp;
//Projecting the points into lightspace and saving the sample points
for(vector<Vertex>::iterator vertex = lightPoints.begin(); vertex != lightPoints.end(); ++vertex){
gluProject( vertex->x, vertex->y, vertex->z,modelview, projection, viewport, &winX, &winY, &winZ );
temp.x = winX;
temp.y = winY;
temp.z = winZ;
// std::cout << winX << " " << winY << " " << winZ << std::endl;
samplePoints.push_back(temp);
}
My depth buffer is 24 bits, which I can't change as far as I know (ATI Radeon HD4570, and I am using GLUT).
I compare my depth:
if(rasterizer.interpolateDepth(A, B, C, baryc) < sample->z - 0.00001*sample->z){
    stencilBits[((int)sample->y*800 + (int)sample->x)] = 1;
}
Both values are floats. Floats should be enough precision, by the way; the paper I am basing this on uses floats as well.
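One thing worth noting about the comparison above: a purely relative bias (0.00001 * sample->z) shrinks toward zero for samples near the near plane. A sketch of a comparison with both a relative and an absolute floor (the helper name and epsilon values are illustrative assumptions, not from the post):

```cpp
#include <algorithm>
#include <cassert>

// Shadow test with a bias that is relative to the sample depth but never
// smaller than a tiny absolute floor, so samples with depth near 0 are
// not compared with an effectively zero bias.
bool inShadow(float triangleDepth, float sampleDepth,
              float relEps = 1e-5f, float absEps = 1e-7f) {
    float bias = std::max(relEps * sampleDepth, absEps);
    return triangleDepth < sampleDepth - bias;
}
```

Equal depths then compare as lit rather than flickering between lit and shadowed due to rounding.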
Try putting the far plane way further away: http://www.codermind.com/files/small_wnear.gif
A couple of suggestions:
- Implement regular shadow mapping without the CPU steps first.
- Very, very carefully read the OpenGL pipeline math and make sure you get everything right, including rounding.
- You say you are interpolating depth. This sounds very wrong: you cannot linearly interpolate depth as-is (you can interpolate depth squared, but I don't think you are doing that).