
How to use libsvm for text classification?

I'd like to write a spam filter program with an SVM, and I chose libsvm as the tool.

I have 1000 good mails and 1000 spam mails, which I split into:

700 good_train mails and 700 spam_train mails

300 good_test mails and 300 spam_test mails

Then I wrote a program to count the number of times each word occurs in each file, and got results like:

good_train_1.txt:  
today 3  
hello 7  
help 5  
...    

I learned that libsvm needs input in a format like:

1 1:3 2:1 3:0

2 1:3 2:3 3:1

1 1:7 3:9

as its input. I know that the leading 1, 2, 1 are the class labels, but what does 1:3 mean?

How can I convert what I've got into this format?


Likely, the format is

classLabel attribute1:count1 ... attributeN:countN

N is the total number of different words in your text corpus. You will have to check the documentation for the tool you are using (or its sources) to see whether you can use a sparser format by leaving out the attributes whose count is 0.
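For example, assuming (purely hypothetically) that the vocabulary is indexed as today=1, hello=2, help=3, the counts from good_train_1.txt above would become a single line, with the class label first (say 1 for good mail):

1 1:3 2:7 3:5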


How can I convert what I've got into this format?

Here's how I would do this. I would use the script you've got to compute the word counts for each mail in the training set. Then, use another script to convert that data into the LIBSVM format you've shown earlier. (This can be done in a variety of ways, but it should be reasonable to write in a language with easy input/output, like Python.) I would batch all the "good mail" data into one file and label that class "1", then do the same with the "spam mail" data and label that class "-1".

As nologin said, LIBSVM requires the class label to precede the features, but the feature indices themselves can be any numbers as long as they appear in ascending order, e.g. 2:5 3:6 5:9 is allowed, but not 3:23 1:3 7:343.
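A minimal sketch of such a conversion script, assuming the per-mail word counts are stored one file per mail in directories called good_train_counts and spam_train_counts (the directory names, output file names, and helper functions below are just illustrative, not anything LIBSVM provides):

# Rough sketch, not the poster's actual script: convert per-mail word-count
# files ("word count" per line) into LIBSVM's sparse "label index:value" format.
import os

def build_vocab(count_dirs):
    # Assign every distinct word a 1-based feature index.
    vocab = {}
    for d in count_dirs:
        for name in sorted(os.listdir(d)):
            with open(os.path.join(d, name)) as f:
                for line in f:
                    parts = line.split()
                    if parts and parts[0] not in vocab:
                        vocab[parts[0]] = len(vocab) + 1
    return vocab

def write_libsvm(count_dir, label, vocab, out):
    # One output line per mail; indices must be written in ascending order.
    for name in sorted(os.listdir(count_dir)):
        feats = {}
        with open(os.path.join(count_dir, name)) as f:
            for line in f:
                parts = line.split()
                if len(parts) != 2:
                    continue
                word, count = parts
                if word in vocab:
                    feats[vocab[word]] = int(count)
        pairs = " ".join("%d:%d" % (i, feats[i]) for i in sorted(feats))
        out.write("%s %s\n" % (label, pairs))

vocab = build_vocab(["good_train_counts", "spam_train_counts"])
with open("file_good", "w") as out:
    write_libsvm("good_train_counts", "1", vocab, out)
with open("file_spam", "w") as out:
    write_libsvm("spam_train_counts", "-1", vocab, out)

Building a single vocabulary from both classes ensures the same word always maps to the same feature index on every line.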

If you're concerned that your data is not in the correct format, use their script

checkdata.py

before training and it should report any possible errors.
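In a standard LIBSVM checkout the script sits in the tools directory, so the call would look roughly like this (path assumed):

python tools/checkdata.py file_training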

Once you have two separate files with data in the correct format, you can call

cat file_good file_spam > file_training

and generate a training file that contains data on both good and spam mail. Then do the same with the testing set. One psychological advantage of forming the data this way is that you know the top 700 (or 300) mails in the training (or testing) set are good mail and the rest are spam. This makes it easier to write other scripts you may want to run on the data, such as a precision/recall calculation.
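Once file_training and file_testing exist, training and prediction with the stock LIBSVM command-line tools would look roughly like this (default parameters; the model and output file names are arbitrary):

svm-train file_training spam.model
svm-predict file_testing spam.model predictions.txt

svm-predict prints the accuracy on the test set and writes one predicted label per line to the output file.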

If you have other questions, the FAQ at http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html should be able to answer a few, as well as the various README files that come with the installation. (I personally found the READMEs in the "tools" and "python" directories to be a great boon.) Sadly, the FAQ does not say much about what nologin mentioned, i.e. the data being in a sparse format.

On a final note, I doubt that you need to keep counts of every possible word that could appear in mail. I would recommend counting only the most common words you would suspect to appear in spam mail. Other potential features include total word count, average word length, average sentence length, and other possible data that you feel may be helpful.
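If you do add features like those, they simply become extra indices in the same LIBSVM lines, placed after the word-count features. A purely illustrative sketch (the function name and the crude sentence splitting are my own, not from LIBSVM):

def extra_features(mail_text, first_index):
    # first_index would typically be len(vocab) + 1 so these indices
    # come after the word-count features.
    words = mail_text.split()
    sentences = [s for s in mail_text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / float(len(words)) if words else 0.0
    avg_sentence_len = len(words) / float(len(sentences)) if sentences else 0.0
    return {first_index: len(words),            # total word count
            first_index + 1: avg_word_len,      # average word length
            first_index + 2: avg_sentence_len}  # average sentence length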
