
How to evaluate a search/retrieval engine using trec_eval?

Is there anybody who has used trec_eval? I need a "trec_eval for dummies".

I'm trying to evaluate a few search engines for my thesis, comparing measures like recall/precision, ranking quality, etc. I cannot figure out how to use trec_eval to send queries to the search engine and get a result file that can be used with trec_eval.


Basically, for trec_eval you need a (human-generated) ground truth, which has to be in a special format:

query-number 0 document-id relevance

Given a collection like 101Categories (see its Wikipedia entry), that would look something like

Q1046   0   PNGImages/dolphin/image_0041.png    0
Q1046   0   PNGImages/airplanes/image_0671.png  128
Q1046   0   PNGImages/crab/image_0048.png   0
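If your relevance judgments live in some other structure, a short script can serialize them into this shape. A minimal sketch in Python, where the judgments dict and the output file name are hypothetical placeholders:

# Minimal sketch: write human relevance judgments to a qrels file
# in the four-column format shown above. `judgments` is made-up data.
judgments = {
    "Q1046": {
        "PNGImages/dolphin/image_0041.png": 0,
        "PNGImages/airplanes/image_0671.png": 128,
        "PNGImages/crab/image_0048.png": 0,
    },
}

with open("groundtruth.qrel", "w") as f:
    for query_id, docs in judgments.items():
        for doc_id, relevance in docs.items():
            # Columns: query-number, iteration (usually 0), document-id, relevance
            f.write(f"{query_id} 0 {doc_id} {relevance}\n")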

The query-number therefore identifies a query (e.g. a picture from a certain category, used to find similar ones). The results from your search engine then have to be transformed to look like

query-number    Q0  document-id rank    score   Exp

or, in practice,

Q1046   0   PNGImages/airplanes/image_0671.png  1   1   srfiletop10
Q1046   0   PNGImages/airplanes/image_0489.png  2   0.974935    srfiletop10
Q1046   0   PNGImages/airplanes/image_0686.png  3   0.974023    srfiletop10

as described here. You might have to adjust the path names for the "document-id". Then you can calculate the standard metrics with trec_eval groundtruth.qrel results. Running trec_eval -h should give you some ideas for choosing the parameters that produce the measurements needed for your thesis.
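To illustrate that transformation, here is a rough Python sketch that turns (query, document, score) tuples from a hypothetical engine into the six-column run format; the results list and the srfiletop10 run tag are placeholders for your real data:

# Minimal sketch: convert (query, document, score) results into the
# six-column run format trec_eval reads. `results` is made-up data.
results = [
    ("Q1046", "PNGImages/airplanes/image_0671.png", 1.0),
    ("Q1046", "PNGImages/airplanes/image_0489.png", 0.974935),
    ("Q1046", "PNGImages/airplanes/image_0686.png", 0.974023),
]

# Order by query id, then by descending score within each query.
results.sort(key=lambda r: (r[0], -r[2]))

with open("results", "w") as f:
    current_query, rank = None, 0
    for query_id, doc_id, score in results:
        # Restart the rank counter at 1 for each new query.
        if query_id != current_query:
            current_query, rank = query_id, 0
        rank += 1
        # Columns: query-number, Q0, document-id, rank, score, run-tag
        f.write(f"{query_id} Q0 {doc_id} {rank} {score} srfiletop10\n")

The run tag in the last column is just a label for the run; it is reported back by trec_eval but does not affect the metrics.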

trec_eval does not send any queries; you have to run them yourself. trec_eval only does the analysis, given a ground truth and your results.
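If you want the numbers in a script rather than on the terminal, you can wrap the call. A sketch, assuming trec_eval is on your PATH and that its output keeps the usual three whitespace-separated columns (measure, query id, value):

import subprocess

# Run trec_eval on the ground truth and run file produced above.
# File names are the hypothetical ones used in the earlier sketches.
proc = subprocess.run(
    ["trec_eval", "groundtruth.qrel", "results"],
    capture_output=True, text=True, check=True,
)

# Each output line is typically: measure  query-id  value
metrics = {}
for line in proc.stdout.splitlines():
    parts = line.split()
    if len(parts) == 3:
        measure, query_id, value = parts
        if query_id == "all":  # aggregate over all queries
            metrics[measure] = value

print(metrics.get("map"), metrics.get("P_10"))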

Some basic information can be found here and here.
