Separating Axis Theorem is driving me nuts!

I am working on an implementation of the Separating Axis Theorem for use in 2D games. It kind of works, but only kind of.

I use it like this:

bool penetration = sat(c1, c2) && sat(c2, c1);

Where c1 and c2 are of type Convex, defined as:

class Convex
{
public:
    float tx, ty;
public:
    std::vector<Point> p;
    void translate(float x, float y) {
        tx = x;
        ty = y;
    }
};

(Point is a structure of float x, float y)
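
For reference, here is a minimal sketch of the Point type assumed in the rest of this post (just the two floats plus a constructor so the push_back calls further down compile; the real struct may have more):

struct Point
{
    float x, y;
    Point(float x_, float y_) : x(x_), y(y_) {}
};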

The points are entered in clockwise order.

My current code (ignore Qt debug):

bool sat(Convex c1, Convex c2, QPainter *debug)
{
    //Debug
    QColor col[] = {QColor(255, 0, 0), QColor(0, 255, 0), QColor(0, 0, 255), QColor(0, 0, 0)};
    bool ret = true;

    int c1_faces = c1.p.size();
    int c2_faces = c2.p.size();

    //For every face in c1
    for(int i = 0; i < c1_faces; i++)
    {
        //Grab a face (face x, face y)
        float fx = c1.p[i].x - c1.p[(i + 1) % c1_faces].x;
        float fy = c1.p[i].y - c1.p[(i + 1) % c1_faces].y;

        //Create a perpendicular axis to project on (axis x, axis y)
        float ax = -fy, ay = fx;

        //Normalize the axis
        float len_v = sqrt(ax * ax + ay * ay);
        ax /= len_v;
        ay /= len_v;

        //Debug graphics (ignore)
        debug->setPen(col[i]);
        //Draw the face
        debug->drawLine(QLineF(c1.tx + c1.p[i].x, c1.ty + c1.p[i].y, c1.p[(i + 1) % c1_faces].x + c1.tx, c1.p[(i + 1) % c1_faces].y + c1.ty));
        //Draw the axis
        debug->save();
        debug->translate(c1.p[i].x, c1.p[i].y);
        debug->drawLine(QLineF(c1.tx, c1.ty, ax * 100 + c1.tx, ay * 100 + c1.ty));
        debug->drawEllipse(QPointF(ax * 100 + c1.tx, ay * 100 + c1.ty), 10, 10);
        debug->restore();

        //Carve out the min and max values
        float c1_min = FLT_MAX, c1_max = FLT_MIN;
        float c2_min = FLT_MAX, c2_max = FLT_MIN;

        //Project every point in c1 on the axis and store min and max
        for(int j = 0; j < c1_faces; j++)
        {
            float c1_proj = (ax * (c1.p[j].x + c1.tx) + ay * (c1.p[j].y + c1.ty)) / (ax * ax + ay * ay);
            c1_min = min(c1_proj, c1_min);
            c1_max = max(c1_proj, c1_max);
        }

        //Project every point in c2 on the axis and store min and max
        for(int j = 0; j < c2_faces; j++)
        {
            float c2_proj = (ax * (c2.p[j].x + c2.tx) + ay * (c2.p[j].y + c2.ty)) / (ax * ax + ay * ay);
            c2_min = min(c2_proj, c2_min);
            c2_max = max(c2_proj, c2_max);
        }

        //Return if the projections do not overlap
        if(!(c1_max >= c2_min && c1_min <= c2_max))
            ret = false; //return false;
    }
    return ret; //return true;
}

What am I doing wrong? It registers collisions, but it is oversensitive along one edge (in my test using a triangle and a diamond; the full test setup is sketched after the vertex lists below):

//Triangle
push_back(Point(0, -150));
push_back(Point(0, 50));
push_back(Point(-100, 100));

//Diamond
push_back(Point(0, -100));
push_back(Point(100, 0));
push_back(Point(0, 100));
push_back(Point(-100, 0));
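
For context, this is roughly how the test is wired up (a condensed sketch; the variable names, the positions passed to translate(), and the painter pointer are made up for illustration):

Convex triangle;
triangle.p.push_back(Point(0, -150));
triangle.p.push_back(Point(0, 50));
triangle.p.push_back(Point(-100, 100));
triangle.translate(200, 200);   //hypothetical world position

Convex diamond;
diamond.p.push_back(Point(0, -100));
diamond.p.push_back(Point(100, 0));
diamond.p.push_back(Point(0, 100));
diamond.p.push_back(Point(-100, 0));
diamond.translate(300, 250);    //hypothetical world position

//painter is the QPainter used for the debug drawing
bool penetration = sat(triangle, diamond, painter) && sat(diamond, triangle, painter);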

This is really getting on my nerves, please help me out :)

http://u8999827.fsdata.se/sat.png


OK, I was wrong the first time. Looking at your picture of the failure case, it is obvious that a separating axis exists, and that it is one of the normals (the normal to the long edge of the triangle). The projection is correct; however, your bounds are not.

I think the error is here:

float c1_min = FLT_MAX, c1_max = FLT_MIN;
float c2_min = FLT_MAX, c2_max = FLT_MIN;

FLT_MIN is the smallest normal positive number representable by a float, not the most negative number. In fact you need:

float c1_min = FLT_MAX, c1_max = -FLT_MAX;
float c2_min = FLT_MAX, c2_max = -FLT_MAX;

or, even better in C++:

float c1_min = std::numeric_limits<float>::max(), c1_max = -c1_min;
float c2_min = std::numeric_limits<float>::max(), c2_max = -c2_min;

because you're probably seeing negative projections onto the axis.
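
If you want to convince yourself, here is a tiny standalone check of the constants involved (just an illustration, independent of your code; std::numeric_limits<float>::lowest() is the C++11 way to spell -FLT_MAX for floats):

#include <cfloat>
#include <cstdio>
#include <limits>

int main()
{
    // FLT_MIN is a tiny *positive* value (~1.2e-38), so a running maximum
    // initialized with it can never end up negative, which produces a false
    // overlap when every projection on the axis is negative.
    std::printf("FLT_MIN  = %g\n", FLT_MIN);
    std::printf("-FLT_MAX = %g\n", -FLT_MAX);
    std::printf("lowest   = %g\n", std::numeric_limits<float>::lowest());
    return 0;
}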
