Matching and ranking based on the relevance of each criterion
I have a growing database containing:
- a table of demands (multiple criteria)
- a table of offers (multiple criteria)
Criteria can be strings (e.g. a country name), booleans, numerics, etc.
I would like to find all demand-offer pairs that match more or less (a little like job banks, matchmaking sites, etc.).
If tables didn't contain many rows I would calculate as follows:
- for each demand, calculate for each offer the relevance of the match by averaging the relevance of each criterion.
But for a large database this would take too much time, wouldn't it?
What solution do you recommend?
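The naive pairwise approach described in the question can be sketched as follows. The per-criterion scoring rules here are illustrative assumptions (exact match for strings and booleans, a distance-based score for numerics); the O(demands × offers) loop at the end is exactly the cost the question worries about.

```python
def criterion_relevance(demand_value, offer_value):
    """Return a relevance in [0, 1] for a single criterion."""
    if isinstance(demand_value, (bool, str)):
        return 1.0 if demand_value == offer_value else 0.0
    # Numeric criteria: closer values score higher (arbitrary scale of 100).
    return max(0.0, 1.0 - abs(demand_value - offer_value) / 100.0)

def match_relevance(demand, offer):
    """Average the per-criterion relevance over the criteria both records share."""
    keys = demand.keys() & offer.keys()
    return sum(criterion_relevance(demand[k], offer[k]) for k in keys) / len(keys)

def all_matches(demands, offers, threshold=0.5):
    """O(len(demands) * len(offers)) -- too slow once both tables grow."""
    return [(d_id, o_id, score)
            for d_id, d in demands.items()
            for o_id, o in offers.items()
            if (score := match_relevance(d, o)) >= threshold]
```

The threshold parameter already hints at the answers below: most pairs score poorly, so there is little point in materialising them.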
To extend the answer from June 24, 2010: capture the pre-calculated relevance score in a Join Table (Demand Key, Offer Key, Relevance Score). Be aware that this Join Table could hold Count(Demands) * Count(Offers) records; it may be prudent to store only those records with Relevance Scores greater than some threshold value.
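A minimal sketch of that join table, using SQLite for brevity. Table and column names are assumptions, and the score itself is assumed to come from whatever per-criterion averaging you already use; the point is that reads become an indexed lookup while the threshold keeps the table sparse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE match_relevance (
        demand_id INTEGER NOT NULL,
        offer_id  INTEGER NOT NULL,
        score     REAL    NOT NULL,
        PRIMARY KEY (demand_id, offer_id)
    )
""")

THRESHOLD = 0.5  # keep the join table sparse: skip weak matches

def record_match(demand_id, offer_id, score):
    """Store the pair only when it clears the threshold."""
    if score >= THRESHOLD:
        conn.execute(
            "INSERT OR REPLACE INTO match_relevance VALUES (?, ?, ?)",
            (demand_id, offer_id, score),
        )

def best_offers(demand_id, limit=10):
    """Reading matches is now a single indexed lookup, not a full scan."""
    return conn.execute(
        "SELECT offer_id, score FROM match_relevance "
        "WHERE demand_id = ? ORDER BY score DESC LIMIT ?",
        (demand_id, limit),
    ).fetchall()
```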
This approach has a computational complexity of O(n) on each insert. You may be able to reduce this to O(log(n)) if the following assumption holds: the Feature Space for Demands (and similarly for Offers) is such that two Demands with high Relevance Scores for the same Offer are also 'close' within their Feature Space. If so:
- Perform a K-Nearest Neighbours analysis on the Comparison Features of Demands (and separately for Offers) and limit the value of K to be approximately log(n) of your data set.
- Calculate the Match Relevance between the representative Feature Vectors of the Clusters and store that in a database table.
- On Insert measure the 'Distance' between the new record and each of the Clusters of that type and store a Foreign Key to the Cluster on the new record.
- When searching for Offers to match a Demand, follow the links from the Demand to its Cluster, from there to the most relevant Offer Cluster, and finally to the individual Offers.
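The lookup path in the steps above can be sketched as follows, assuming records have already been reduced to numeric feature vectors. The centroids, distance metric, and cluster-to-cluster relevance table are illustrative; a real system would use a proper clustering or nearest-neighbour library to build them.

```python
import math

def nearest_cluster(vector, centroids):
    """On insert: link the new record to its closest cluster."""
    return min(centroids, key=lambda cid: math.dist(vector, centroids[cid]))

def offers_for_demand(demand_vector, demand_centroids, offer_centroids,
                      cluster_relevance, offers_by_cluster):
    """Demand -> demand cluster -> most relevant offer cluster -> offers."""
    d_cluster = nearest_cluster(demand_vector, demand_centroids)
    o_cluster = max(offer_centroids,
                    key=lambda oc: cluster_relevance.get((d_cluster, oc), 0.0))
    return offers_by_cluster.get(o_cluster, [])
```

Search cost is now proportional to the number of clusters rather than the number of records, which is where the O(log(n)) figure comes from when K is held near log(n).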
This trades specificity for speed, so verify the initial assumption: take a sample of Demands, score each against every Offer, and sort by Relevance. Traverse the ordered Offers and count how many you find in the result set from the Cluster Search before you hit one that is missing. A subjective analysis of this test will give you an idea of how much the Clustering is costing you.
I'd do it the way you describe - but with a rolling caching mechanism and some indexes.
Work out the relevance on creation and then see how it goes. Incremental additions could be fine if the frequency is low (high-read, low-write). If it is high on both ends, you could split into two databases... process updates on one, then periodically push the post-processed data into the other, which is the default source for reading.
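The "work out the relevance on creation" idea above amounts to scoring only the new record against the existing ones, instead of recomputing the whole matrix. A minimal in-memory sketch, where `match_relevance` stands in for whatever per-pair scoring function you use:

```python
class MatchCache:
    """Incrementally maintained demand->offer relevance scores."""

    def __init__(self, match_relevance, threshold=0.5):
        self.score = match_relevance
        self.threshold = threshold
        self.demands = {}   # demand_id -> criteria
        self.matches = {}   # demand_id -> {offer_id: score}

    def add_demand(self, demand_id, criteria):
        self.demands[demand_id] = criteria
        self.matches[demand_id] = {}

    def add_offer(self, offer_id, criteria):
        """O(len(demands)) per insert -- not O(demands * offers) per query."""
        for d_id, d in self.demands.items():
            s = self.score(d, criteria)
            if s >= self.threshold:
                self.matches[d_id][offer_id] = s
```

In the high-read/high-write split described above, this cache would live in the write-side database, with the `matches` table pushed periodically to the read-side one.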