Writing huge Mongo result set to disk w/ Python in a resource-friendly way
There is a Mongo collection with more than 5 million items. I need to get a "representation" (held in a variable, or put into a file on disk, anything at this point) of a single attribute of all of the 'documents'.
My query is something like this:
cursor = db.collection.find({"conditional_field": {"subfield": True}}, {"field_i_want": True})
My first, silly, attempt was to pickle the cursor, but I quickly realized it doesn't work like that.
In this case, "field_i_want" contains an integer. As an example of something I've tried, I did this and practically locked up the server for several minutes:
ints = [i['field_i_want'] for i in cursor]
... to just get a list of the integers. This hogged CPU resources on the server for far too long.
Is there a remotely simple way to retrieve these results into a list, tuple, set, pickle, file, something, that won't totally hog the CPU?
Ideally I could dump the results to be read back in later. But I'd like to be as kind as possible while dumping them.
I think that streaming the results is likely to help here:
with open("/path/to/storage/file", "w") as f:
    for row in cursor:
        # write() needs a string, so convert the integer and add a newline
        f.write(str(row['your_field']) + "\n")
Don't hold everything in memory if you don't have to.
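If server load is the main worry, you can also cap how much work each round trip does with a projection and a modest batch size. Here's a minimal sketch, assuming pymongo, the field names from the question, and a hypothetical database name (the batch size of 1000 is an arbitrary choice):

from pymongo import MongoClient

client = MongoClient()  # assumes a local mongod
db = client.mydb        # hypothetical database name

# Project only the field we need and pull it down in small batches,
# so neither the server nor this process holds much at once.
cursor = db.collection.find(
    {"conditional_field": {"subfield": True}},
    {"field_i_want": True, "_id": False},
).batch_size(1000)

with open("/path/to/storage/file", "w") as f:
    for doc in cursor:
        f.write(str(doc["field_i_want"]) + "\n")

Reading the values back later is then a one-liner: ints = [int(line) for line in open("/path/to/storage/file")].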
Though an answer has been accepted already, I'd add that you might consider adding an index too. It's easy to think you've exhausted Mongo's 'bandwidth', but it's 'mongo' (as in humongous) for a reason! Depending on the structure of your database, 5 million results can be perfectly fast; it sounds like your data amounts to roughly 5 million integers in total. For simplicity, we'll assume field_i_want and conditional_field are variables holding the field names. If you do:
from pymongo import ASCENDING, DESCENDING
db.collection.ensure_index([(conditional_field, DESCENDING), (field_i_want, ASCENDING)])  # create_index in pymongo 3+
for example, you'll be able to execute a 'covered query,' like this:
db.collection.find({conditional_field: True}, fields={field_i_want: 1, "_id": 0})  # 'projection=' in pymongo 3+; 0 (not -1) excludes _id
Sometimes pymongo insists on a list-of-tuples syntax instead of MongoDB's native dictionary syntax, as with ensure_index above. For the projection ('fields') a dictionary does work, which is what a covered query needs; if it didn't, you'd have to look into expressing the projection as a list instead. The important thing with a covered query is to return only fields that are part of the index being used. That's why "_id" is excluded: although "_id" is automatically indexed, it isn't part of the compound index this query will use. Because every field the query touches lives in the index, Mongo can answer it from the index alone, without fetching the documents themselves, which is dramatically faster. If you'd rather have a plain list of integers than a list of dictionaries ('documents'), you can iterate the cursor and do something like:
ints = [doc[field_i_want] for doc in myquery]
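Putting it together, here's a minimal end-to-end sketch of the covered-query approach, using the modern create_index/projection spellings; the database name and connection details are assumptions:

from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient()  # assumes a local mongod
db = client.mydb        # hypothetical database name

conditional_field = "conditional_field"  # field names from the question
field_i_want = "field_i_want"

# Compound index that will cover the query below.
db.collection.create_index([(conditional_field, DESCENDING),
                            (field_i_want, ASCENDING)])

# Filter and projection touch only indexed fields, and _id is excluded,
# so Mongo can satisfy the query from the index without reading documents.
myquery = db.collection.find({conditional_field: True},
                             {field_i_want: 1, "_id": 0})

ints = [doc[field_i_want] for doc in myquery]

You can sanity-check that the query is actually covered with myquery.explain(), which should report zero documents examined.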
Mongo is already a binary representation and it's good at storage, so this may be one of those questions where the best answer is to keep honing the question. If you just want a dump, you can use the utilities that ship with Mongo, found in the same directory as your mongod binary; mongoexport will get your data into JSON, stored as a file (though again, inside Mongo it's already stored as BSON).
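For instance, a sketch of a mongoexport invocation (the database name and output file are placeholders):

mongoexport --db mydb --collection collection \
    --query '{"conditional_field.subfield": true}' \
    --fields field_i_want --out field_i_want.json

The output is newline-delimited JSON, one document per line with the requested field, which is easy to stream back in later.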