django-haystack: choosing a large SearchIndex over an Indexed DB
I have a UserProfile model with 35 fields (Char, Int, Bool, Dec, M2M, FK). As part of the search view functionality, one of the fields requires full-text searching, while the remaining 34 fields will be used for 'advanced search filtering' (using __gte, __lte, __exact, __in, __startswith). A search query may use anywhere from 5 to 35 of these fields as criteria.
I'm using Haystack to build a SearchIndex and currently have all 35 fields added, but this seems inefficient, since I am essentially bypassing the Django ORM for the structured filtering (is that right?).
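For reference, the all-fields index I have now looks roughly like this (a Haystack 2.x-style sketch; field names such as age, city and is_active are placeholders for my real fields):

from haystack import indexes
from myapp.models import UserProfile

class UserProfileIndex(indexes.SearchIndex, indexes.Indexable):
    # the one field that needs full-text search, rendered from a template
    text = indexes.CharField(document=True, use_template=True)

    # a few of the ~34 structured fields, mirrored into the search index
    age = indexes.IntegerField(model_attr='age')
    city = indexes.CharField(model_attr='city')
    is_active = indexes.BooleanField(model_attr='is_active')
    # ...and so on for the remaining filter fields

    def get_model(self):
        return UserProfile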
An answer to "Filter Django Haystack results like QuerySet?" suggests that I could store only the single full-text field in the SearchIndex and combine the SearchQuerySet with Django's QuerySet for the remaining 34 filter fields. Would I then set db_index=True on some or all of those fields in my Django model? Would this two-stage query-merge approach scale well to thousands of results?
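The two-stage approach I have in mind would look roughly like this (field names and query terms are again placeholders): the full-text part runs in Haystack, and the structured filters run in the ORM.

from haystack.query import SearchQuerySet
from myapp.models import UserProfile

# Stage 1: full-text search in Haystack, collecting only primary keys.
sqs = SearchQuerySet().models(UserProfile).auto_query('some search terms')
matching_pks = [result.pk for result in sqs]

# Stage 2: advanced filtering in the ORM, restricted to the Haystack hits.
profiles = UserProfile.objects.filter(pk__in=matching_pks,
                                      age__gte=21,
                                      city__startswith='San')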
Since my UserProfile model could grow to 300K-2M entries, I am trying to understand how best to index it. Being new to database indexing and searching, I am looking for any insight on how to optimize this.
Mixing the two is probably not scalable. When you build a QuerySet or a SearchQuerySet, the query doesn't actually run until you ask for the results somewhere; both are lazy in that sense.
But if you do something like
results = [r.pk for r in searchqueryset]
it actually executes that query against Haystack/Solr. If you're looking at 2M entries in total, that list can potentially contain 2M items, and you'd then be sending a 2M-item list to MySQL (through the ORM, e.g. via pk__in) for further filtering. That obviously will never scale.
If you stick to Haystack alone and keep building up your SearchQuerySet, it will only be executed once, when the results are accessed. Also, avoid touching {{ result.object }} more than you need to, because that hits the database for each result.
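A minimal sketch of staying entirely on the Haystack side, assuming the structured fields were added to the SearchIndex as above (field names are placeholders): the filters chain lazily, and a single backend query runs only when the results are sliced or iterated.

from haystack.query import SearchQuerySet
from myapp.models import UserProfile

sqs = (SearchQuerySet()
       .models(UserProfile)
       .auto_query('some search terms')   # the full-text part
       .filter(age__gte=21)               # structured filters stay in the engine
       .filter(city__startswith='San'))

# Nothing has hit Solr yet; slicing triggers one query, and only for
# the window of results you actually request.
page = sqs[:20]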
You can also look at load_all(), faceting, etc.
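For example, load_all() pulls in the model objects for the current slice of results in bulk, instead of issuing one database query per {{ result.object }} access (a sketch with a placeholder query):

from haystack.query import SearchQuerySet

results = SearchQuerySet().auto_query('some search terms').load_all()[:20]
for result in results:
    profile = result.object  # already fetched in bulk, no per-result query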