
Comparison Scoring Algorithm

I am having a tricky time determining a scoring algorithm for something like a peer-reviewed research paper.

Take this class:

public class ResearchPaper
{
    int StudentID;
    int PaperID;
    string Title;
    byte Grade;
}

Grade is a score from 0 to 100 (F to A+) that is originally assigned by the teacher.

After the grade has been assigned, it can be (indirectly) modified by feedback from peers through paper comparisons. If a peer says that paper B (grade 75) is BETTER than paper A (grade 80), then paper A loses a point (new grade 79) and paper B gains a point (new grade 76). This can happen thousands of times, and paper B could end up with a better grade than paper A (which is fine).
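The adjustment rule above can be sketched like this (a minimal sketch; the class and method names, and clamping at both ends of the 0–100 range, are my assumptions):

```csharp
using System;

public static class GradeAdjuster
{
    public const int MaxGrade = 100;
    public const int MinGrade = 0;

    // A peer judged `winner` better than `loser`: the winner gains a point,
    // the loser drops a point, both clamped to the 0..100 range.
    public static (byte, byte) ApplyComparison(byte winner, byte loser)
    {
        byte newWinner = (byte)Math.Min(winner + 1, MaxGrade);
        byte newLoser  = (byte)Math.Max(loser - 1, MinGrade);
        return (newWinner, newLoser);
    }
}
```

In the example above, `ApplyComparison(75, 80)` turns paper B's 75 into 76 and paper A's 80 into 79.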

My plan was to NOT change the grades if the peer review agrees that paper A actually IS better than paper B (the way the teacher graded them); otherwise paper A would gain points in a runaway process until it reached 100 (which is set as the max).

The problem with this algorithm is that with a LOT of peer reviews, all papers eventually approach the same grade, even if grade-changing disagreements are relatively uncommon, which effectively erases the grades originally assigned by the teacher.

Is there a better algorithm for something like this?


I think you have a pretty good system, but I would make Grade an sbyte and keep the peer reviews separate from the initial graded score. In the voting scheme below, every paper that is not selected loses a point, so the peer rating can easily go negative; if you have more than 100 papers and one of them is never upvoted, you'll have serious problems with class morale if you don't use signed values.

public class ResearchPaper
{
    public int StudentID;
    public int PaperID;
    public string Title;
    public sbyte InitialGrade;   // the teacher's original grade
    public sbyte PeerRating;     // net peer votes; may go negative

    public sbyte Grade
    {
        get
        {
            // the sum is promoted to int in C#, so cast back to sbyte
            return (sbyte)(InitialGrade + PeerRating);
        }
    }
}

public class PaperGrader
{
    private List<ResearchPaper> Papers;

    public PaperGrader(List<ResearchPaper> Papers)
    {
        this.Papers = Papers;
    }

    // The chosen paper gains a point; every other paper loses one.
    public void Vote(int Id)
    {
        foreach (ResearchPaper Paper in Papers)
        {
            if (Paper.PaperID == Id)
                Paper.PeerRating++;
            else
                Paper.PeerRating--;
        }
    }
}

An interesting side effect of this code is that you can see which papers the students liked or disliked the most. In an ideal world, that would give you results resembling the following:

[figure: Comparison Scoring Algorithm]


If you only adjust the grades when the peer review disagrees with the ordering of the original grades, then with a large number of peer reviews the only possible outcome is that all papers end up with the same grade.
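A tiny simulation makes the convergence concrete. This assumes the worst case, where every review prefers the lower-graded paper, using the 80/75 example from the question:

```csharp
// Worst case: every peer review disagrees with the teacher's ordering.
int a = 80, b = 75;
while (a > b)
{
    a--;    // the originally better paper loses a point
    b++;    // the originally worse paper gains one
}
// The loop stops as soon as the ordering flips: a == 77, b == 78.
```

A 5-point gap closes in just three reviews; any initial gap collapses linearly in the number of disagreeing reviews.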

Are you set on doing the peer reviews by comparison, rather than by grades? The problem is that a comparison system is also biased by which pairs get compared: matchups between papers whose original grades are very far apart, or very close together, do not carry the same information as matchups between mid-distance pairs.

Perhaps the peer reviews could also be grades, just like the normal grades. Store them separately; once a paper has enough peer grades, average them in with the original grade, but at a much lower weight, perhaps even 100:1 in favor of the teacher's grade.
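A sketch of that blend (the minimum-count threshold, the 100:1 weight, and the names here are all illustrative assumptions, not a fixed design):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class GradeBlender
{
    // Average the peer grades into the teacher's grade at a 100:1 ratio,
    // but only once enough peer grades have accumulated.
    public static double BlendedGrade(
        double teacherGrade,
        IReadOnlyList<double> peerGrades,
        int minPeerGrades = 5,
        double teacherWeight = 100.0)
    {
        if (peerGrades.Count < minPeerGrades)
            return teacherGrade;                 // not enough peer data yet

        double peerAverage = peerGrades.Average();
        return (teacherGrade * teacherWeight + peerAverage) / (teacherWeight + 1.0);
    }
}
```

With a teacher grade of 80 and ten peer grades averaging 90, the blend moves the grade only to about 80.1, so peers nudge the grade rather than overrule it.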

I also think you'll want a higher level of precision for the grades (e.g. a double rather than a byte) if you are going to build any kind of system like this.
