Machine learning algorithm for data classification. [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 10 years ago.

I'm looking for some guidance about which techniques/algorithms I should research to solve the following problem. I've currently got an algorithm that clusters similar-sounding mp3s using acoustic fingerprinting. In each cluster, I have all the different metadata (song/artist/album) for each file. For that cluster, I'd like to pick the "best" song/artist/album metadata that matches an existing row in my database, or if there is no best match, decide to insert a new row.

For a cluster, there is generally some correct metadata, but individual files have many types of problems:

  • Artist/song names are completely wrong, or just slightly misspelled
  • The artist/song/album is missing, but the rest of the information is there
  • The song is actually a live recording, but only some of the files in the cluster are labeled as such
  • There may be very little metadata, in some cases just the file name, which might be artist - song.mp3, artist - album - song.mp3, or another variation

A simple voting algorithm (along the lines sketched below) works fairly well, but I'd like to have something I can train on a large set of data that might pick up more nuances than what I've got right now. Any links to papers or similar projects would be greatly appreciated.
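To make the voting idea concrete, here is a minimal sketch of a per-field majority vote; the field names and example data are illustrative, not my actual code:

```python
from collections import Counter

def vote_metadata(cluster):
    """Pick the most common non-missing value for each metadata field."""
    best = {}
    for field in ("artist", "album", "title"):
        values = [rec[field] for rec in cluster if rec.get(field)]
        if values:
            best[field] = Counter(values).most_common(1)[0][0]
    return best

files = [
    {"artist": "Radiohead", "album": "OK Computer", "title": "Airbag"},
    {"artist": "Radiohead", "album": None,          "title": "Airbag"},
    {"artist": "Radiohaed", "album": "OK Computer", "title": "Airbag (live)"},
]
print(vote_metadata(files))
# -> {'artist': 'Radiohead', 'album': 'OK Computer', 'title': 'Airbag'}
```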

Thanks!


If I understand your problem correctly, you have an existing technique for dividing songs, etc., into clusters and now want to select a "best" example of the contents of that cluster based on whatever the defining characteristics are.

I would take a look at Bayesian classifiers. These could help with inferring the key defining characteristics of any given cluster in your data (assuming that clustering was not based on an explicit, well-defined taxonomy) as well as provide some tolerance for noise and error in the metadata or other parameters. Then depending on the nature of your data and clusters, you could perhaps use maximum likelihood or sampling methods to determine one or more most representative examples from a given cluster.

Bayesian methods can also be useful for inferring missing data, e.g., missing metadata values. The sample distribution can be used to generate likely values for the missing data based on the known values in other data fields.
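As a rough illustration of that maximum-likelihood idea (the field names, the smoothing constant, and the penalty for unseen values are all assumptions, not anything from the question):

```python
from collections import Counter

def field_distributions(cluster, fields, alpha=1.0):
    """Smoothed categorical distribution of observed values per field."""
    dists = {}
    for field in fields:
        counts = Counter(rec[field] for rec in cluster if rec.get(field))
        total = sum(counts.values()) + alpha * (len(counts) + 1)
        dists[field] = {v: (n + alpha) / total for v, n in counts.items()}
    return dists

def most_representative(cluster, fields=("artist", "album", "title")):
    """Pick the record with the highest joint likelihood under the
    per-field distributions, then fill its gaps with each field's mode."""
    dists = field_distributions(cluster, fields)

    def likelihood(rec):
        p = 1.0
        for f in fields:
            p *= dists[f].get(rec.get(f), 1e-6)  # small penalty for missing/rare values
        return p

    best = dict(max(cluster, key=likelihood))
    for f in fields:
        if not best.get(f) and dists[f]:
            best[f] = max(dists[f], key=dists[f].get)
    return best
```

A plain per-field vote is the special case where each field is scored independently; the joint score differs in that it prefers whole records that are internally consistent with the rest of the cluster.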


The Levenshtein distance is a metric for the "distance" between two strings: the minimum number of single-character insertions, deletions, and substitutions needed to change one string into the other.

You could use this metric to deal with misspellings: if the distance between two strings is small relative to their length, one is most likely a misspelling of the other.

http://en.wikipedia.org/wiki/Levenshtein_distance
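For reference, a textbook dynamic-programming implementation (not from the original answer) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

print(levenshtein("Radiohaed", "Radiohead"))  # 2: a transposition costs two edits
```

In practice you would normalize the distance by string length (or use a similarity ratio such as Python's difflib.SequenceMatcher) so that one fixed threshold works for both short and long titles.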
