
Specialised algorithm to find positive real solutions to quartic equations? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.


Closed 3 years ago.


I'm looking for a specialised algorithm to find positive real solutions to quartic equations with real coefficients (also known as biquadratic equations, or polynomial equations of degree 4). They have the form:

a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0

with a0, a1, ..., a4 being real numbers.

It's supposed to run on a microcontroller, which will need to do quite a lot of those calculations. So performance is an issue. That's why I'm looking for a specialised algorithm for positive solutions. If possible I'd like it to compute the exact solutions.

I know there is a general way to compute the solution of a quartic equation but it is rather involved in terms of computation.

Can someone point me in the right direction?

Edit:

Judging from the answers, some seem to have misunderstood me (though I was pretty clear about it). I know of the standard ways of solving quartic equations. They don't do it for me - they neither fit in memory nor are they sufficiently fast. What I need is a high-accuracy, highly efficient algorithm that finds only real solutions (if that helps) to quartic equations with real coefficients. I'm not sure such an algorithm exists, but I thought you guys might know. P.S.: The downvotes didn't come from me.


This is one of those situations where it is probably easier to find all the roots using complex arithmetic than to only find the positive real roots. And since it sounds like you need to find multiple roots at once, I would recommend using the Durand-Kerner method, which is basically a refinement of the method of Weierstrass:

http://en.wikipedia.org/wiki/Durand%E2%80%93Kerner_method

Weierstrass' method is in turn a refinement of Newton's method that solves for all the roots of the polynomial in parallel (and it has the big advantage that it is brain-dead easy to code up). It converges at a roughly quadratic rate in general, though only linearly for multiple roots. For most quartic polynomials, you can pretty much nail the roots in just a few iterations. If you need a more general-purpose solution, then you should instead use Jenkins-Traub:

http://en.wikipedia.org/wiki/Jenkins%E2%80%93Traub_method

This is faster for higher degree polynomials, and basically works by converting the problem into finding the eigenvalues of the companion matrix:

http://en.wikipedia.org/wiki/Companion_matrix

EDIT: As a second suggestion, you could also try using the power method on the companion matrix. If your equation has only non-negative coefficients, you may find it useful to apply the Perron-Frobenius theorem to the companion matrix. At a minimum, this certifies that there exists at least one non-negative root:

http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem


Yes, there are general ways. You need a root finding algorithm, like bracketing and bisection, secant, false position, Ridder, Newton-Raphson, deflation, Muller, Laguerre, or Jenkins-Traub - did I leave anyone out?

Check out "Numerical Recipes" for details.
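For instance, bracketing and bisection (the first method on the list) can be sketched like this; the iteration count and the `NAN` convention for a bad bracket are my own choices, and the caller must supply an interval with a sign change:

```c
#include <math.h>

/* Evaluate a[4]*x^4 + a[3]*x^3 + a[2]*x^2 + a[1]*x + a[0] by Horner's rule. */
static double quartic_eval(const double a[5], double x)
{
    return (((a[4] * x + a[3]) * x + a[2]) * x + a[1]) * x + a[0];
}

/* Bisection on [lo, hi]; requires a sign change over the interval.
   Returns the root, or NAN if the bracket is invalid. */
static double bisect(const double a[5], double lo, double hi)
{
    double flo = quartic_eval(a, lo);
    if (flo == 0.0)
        return lo;
    if (flo * quartic_eval(a, hi) > 0.0)
        return NAN;  /* no sign change: not a valid bracket */
    for (int i = 0; i < 200; i++) {
        double mid = 0.5 * (lo + hi);
        double fmid = quartic_eval(a, mid);
        if (fmid == 0.0)
            return mid;
        if (flo * fmid < 0.0) {
            hi = mid;           /* root is in the left half */
        } else {
            lo = mid;           /* root is in the right half */
            flo = fmid;
        }
    }
    return 0.5 * (lo + hi);
}
```

For positive roots specifically, brackets can be taken inside (0, B] where B is any upper root bound computed from the coefficients (e.g. a Cauchy bound).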


Take a look at Ferrari's method. It involves quite a bit of computation, but may serve your needs.


Can you supply good start values to ensure that you always find all solutions? Newton's method would converge quickly.

I checked in Maxima:

solve(a*x^4+b*x^3+c*x^2+d*x+e=0,x);

The solution looks horrible indeed. You can easily run into stability problems: this happens whenever you subtract two floating-point numbers with close values.

However, if the coefficients are constant, you can just implement the direct formula. You can get the solution either by installing Maxima or by entering the equation on wolframalpha.com.
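The Newton iteration suggested above is a few lines in C. This is a minimal sketch under my own assumptions (start value supplied by the caller, fixed iteration cap, relative step tolerance); the derivative is evaluated alongside the polynomial in a single Horner pass:

```c
#include <math.h>

/* Newton's method for a[4]*x^4 + a[3]*x^3 + a[2]*x^2 + a[1]*x + a[0] = 0.
   x0 is the caller-supplied start value; a good start (or a bracket from
   the coefficients) is what keeps the iteration from wandering off. */
static double newton_quartic(const double a[5], double x0)
{
    double x = x0;
    for (int i = 0; i < 50; i++) {
        /* Horner evaluation of p(x) and p'(x) in one pass. */
        double p = a[4], dp = 0.0;
        for (int k = 3; k >= 0; k--) {
            dp = dp * x + p;
            p = p * x + a[k];
        }
        if (dp == 0.0)
            break;  /* flat spot: Newton step undefined, give up */
        double step = p / dp;
        x -= step;
        if (fabs(step) < 1e-14 * (1.0 + fabs(x)))
            break;  /* converged */
    }
    return x;
}
```

Near a simple root this converges quadratically, which is why a few iterations usually suffice once the start value is reasonable.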


No. There is no magic way to find the roots of a 4th-order polynomial equation, at least not without doing the work. Yes, there is a formula for 4th-order polynomials that involves complex arithmetic, but it will return all the roots: complex, positive real, and negative real. Either use an iterative scheme, or do the algebra.

You are lucky an analytical solution exists at all. Had you a 5th-order polynomial, there would not even be one (by the Abel-Ruffini theorem).


I realize this answer is rather late, but I think a good alternative to the methods already mentioned is TOMS Algorithm 326, which is based on the paper "Roots of Low Order Polynomials" by Terence R. F. Nonweiler, CACM (April 1968).

This is an algebraic solution to 3rd- and 4th-order polynomials that is reasonably compact and fast. It is certainly much simpler and much faster than Jenkins-Traub and doesn't require iteration.

Don't use the TOMS code however, as it was done rather poorly.

A rewrite of this algorithm is given here, which was rigorously tested for accuracy.


Here is the C code that I wrote. Note that it only gives you the real roots of the equation.


#include <stdio.h>
#include <stdlib.h>
#include <math.h>
/*--------------------------------------------
 Returns one real root of x^3 + b*x^2 + c*x + d = 0
 --------------------------------------------*/
double cubic(double b,double c,double d)
{
    double p=c-b*b/3.0;
    double q=2.0*b*b*b/27.0-b*c/3.0+d;

    if(p==0.0) return cbrt(-q)-b/3.0;
    if(q==0.0) return -b/3.0;

    double t=sqrt(fabs(p)/3.0);
    double g=1.5*q/(p*t);
    if(p>0.0)
    return -2.0*t*sinh(asinh(g)/3.0)-b/3.0;


    if(4.0*p*p*p+27.0*q*q<0.0)
    return 2.0*t*cos(acos(g)/3.0)-b/3.0;

    if(q>0.0)
    return -2.0*t*cosh(acosh(-g)/3.0)-b/3.0;

    return 2.0*t*cosh(acosh(g)/3.0)-b/3.0;
}
/*--------------------------------------------
 Solves x^4 + b*x^3 + c*x^2 + d*x + e = 0; stores the
 real roots in ans and returns their count
 --------------------------------------------*/
int quartic(double b,double c,double d,double e,double* ans)
{

    double p=c-0.375*b*b;
    double q=0.125*b*b*b-0.5*b*c+d;
    double m=cubic(p,0.25*p*p+0.01171875*b*b*b*b-e+0.25*b*d-0.0625*b*b*c,-0.125*q*q);
    if(q==0.0)
    {
        if(m<0.0) return 0;
        int nroots=0;
        double sqrt_2m=sqrt(2.0*m);
        if(-m-p>0.0)
        {
            double delta=sqrt(2.0*(-m-p));
            ans[nroots++]=-0.25*b+0.5*(sqrt_2m-delta);
            ans[nroots++]=-0.25*b-0.5*(sqrt_2m-delta);
            ans[nroots++]=-0.25*b+0.5*(sqrt_2m+delta);
            ans[nroots++]=-0.25*b-0.5*(sqrt_2m+delta);
        }

        if(-m-p==0.0)
        {
            ans[nroots++]=-0.25*b-0.5*sqrt_2m;
            ans[nroots++]=-0.25*b+0.5*sqrt_2m;
        }

        return nroots;
    }

    if(m<0.0) return 0;
    double sqrt_2m=sqrt(2.0*m);
    int nroots=0;
    if(-m-p+q/sqrt_2m>=0.0)
    {
        double delta=sqrt(2.0*(-m-p+q/sqrt_2m));
        ans[nroots++]=0.5*(-sqrt_2m+delta)-0.25*b;
        ans[nroots++]=0.5*(-sqrt_2m-delta)-0.25*b;
    }

    if(-m-p-q/sqrt_2m>=0.0)
    {
        double delta=sqrt(2.0*(-m-p-q/sqrt_2m));
        ans[nroots++]=0.5*(sqrt_2m+delta)-0.25*b;
        ans[nroots++]=0.5*(sqrt_2m-delta)-0.25*b;
    }

    return nroots;
}
/*--------------------------------------------
 Command-line driver: expects coefficients a b c d e
 --------------------------------------------*/
int main(int nargs,char* args[])
{
    if(nargs!=6)
    {
        printf("5 arguments are needed\n");
        return EXIT_FAILURE;
    }
    double a=atof(args[1]);
    double b=atof(args[2]);
    double c=atof(args[3]);
    double d=atof(args[4]);
    double e=atof(args[5]);
    if(a==0.0)
    {
        printf("1st argument should be nonzero\n");
        return EXIT_FAILURE;
    }

    int nroots;
    double ans[4];
    nroots=quartic(b/a,c/a,d/a,e/a,ans);
    if(nroots==0)
        printf("Equation has no real roots!\n");
    else
    {
        printf("Equation has %d real roots: ",nroots);
        for(int i=0;i<nroots-1;i++) printf("%.16lf, ",ans[i]);
        printf("%.16lf\n",ans[nroots-1]);
    }

    return EXIT_SUCCESS;
}
