
Getting and iterating over a large dataset: what is acceptable? And why the difference between the admin console log and Appstats?

While trying to optimize a query that fetches store records based on location, I stumbled onto something strange (I think): fetching a large dataset takes a lot of CPU time.

Basically, I have > 1000 records that need to be iterated over to find the stores within 3000m of a user's position, and I was seeing quite high numbers in the admin console.
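For context, the per-request filtering looks roughly like this (a simplified sketch; the `lat`/`lon` property names stand in for my actual fields):

    import math

    EARTH_RADIUS_M = 6371000  # mean Earth radius in metres

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/lng points, in metres.
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2 +
             math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
             math.sin(dlon / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def stores_near(user_lat, user_lon, radius_m=3000):
        # Naive O(n) filter: fetch every record, test each distance.
        records = model.StoreRecords.all().fetch(1000)
        return [r for r in records
                if haversine_m(user_lat, user_lon, r.lat, r.lon) <= radius_m]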

This led to some datastore testing, which produced some interesting numbers for getting 1000 records.

I wrote 6 test methods that I ran separately, taking the CPU times from the admin console and Appstats; in production the results were:

    r = db.GqlQuery("SELECT __key__ FROM StoreRecords").fetch(1000)
    # appstats: real=120ms cpu=182ms api=845ms
    # admin console: 459ms 1040cpu_ms 845api_cpu_ms

    r = db.GqlQuery("SELECT __key__ FROM StoreRecords").fetch(100)
    # appstats: real=21ms cpu=45ms api=95ms
    # admin console: 322ms 134cpu_ms 95api_cpu_ms

    r = db.GqlQuery("SELECT * FROM StoreRecords").fetch(1000)
    # appstats: real=1208ms cpu=1979ms api=9179ms
    # admin console: 1233ms 10054cpu_ms 9179api_cpu_ms

    r = db.GqlQuery("SELECT * FROM StoreRecords").fetch(100)
    # appstats: real=57ms cpu=82ms api=929ms
    # admin console: 81ms 1006cpu_ms 929api_cpu_ms

    r = model.StoreRecords.all().fetch(1000)
    # appstats: real=869ms cpu=1526ms api=9179ms
    # admin console: 1061ms 9956cpu_ms 9179api_cpu_ms

    r = model.StoreRecords.all().fetch(100)
    # appstats: real=74ms cpu=86ms api=929ms
    # admin console: 97ms 1025cpu_ms 929api_cpu_ms

Here I fetch at most 1000 records, but I will eventually need to fetch them all (about 4-5000), presumably by paging with query cursors as sketched below.
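Something like this (a rough sketch, untested):

    def fetch_all_store_records(batch_size=1000):
        # Page through the whole kind using query cursors.
        records = []
        query = model.StoreRecords.all()
        while True:
            batch = query.fetch(batch_size)
            records.extend(batch)
            if len(batch) < batch_size:
                break  # last page reached
            # Resume the query where the previous batch ended.
            query = model.StoreRecords.all().with_cursor(query.cursor())
        return records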

My questions are:

  1. Should a fetch of 1000 records really take almost 20 seconds (10054 cpu_ms + 9179 api_cpu_ms)?
  2. Why are there differences between the Appstats and admin console times? Which figures count against my quota?

One could easily work around this by pushing the fetched records into memcache as protobufs, roughly as sketched below. But I'm curious about the high usage and the differences in time between Appstats and the admin console.
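What I had in mind is roughly this (a sketch only; the cache key and expiry are arbitrary choices of mine):

    from google.appengine.api import memcache
    from google.appengine.datastore import entity_pb
    from google.appengine.ext import db

    CACHE_KEY = 'store_records'  # hypothetical cache key

    def get_store_records():
        # Serve from memcache when possible; entities are stored as
        # encoded protocol buffers, which is cheaper than pickling them.
        encoded = memcache.get(CACHE_KEY)
        if encoded is not None:
            return [db.model_from_protobuf(entity_pb.EntityProto(e))
                    for e in encoded]
        records = model.StoreRecords.all().fetch(1000)
        memcache.set(CACHE_KEY,
                     [db.model_to_protobuf(r).Encode() for r in records],
                     time=3600)  # expire after an hour (arbitrary)
        return records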

Bonus question: how come fetching 1000 records always results in 9179 api_cpu_ms?


Why is it surprising that retrieving a lot of records takes a lot of resources? This is an O(n) process, and you really shouldn't be doing this on a per-request basis. To answer your questions in order:

  1. How much CPU time it uses depends on the nature of the records, but this result isn't surprising. Note that it's nearly 20 CPU seconds, not wallclock seconds. Also note that when the new billing model rolls out you'll be charged for datastore operations and instance hours, which is what you should be optimising for.
  2. The admin console shows the authoritative figures that you're billed based on. The appstats figures are lower because they only count the time spent during API calls, not the time spent executing your own code.

If your set of records is small and fairly static, you should cache them in instance memory rather than fetching them each time or storing them in memcache. If they're larger and dynamic, you should use something like GeoModel so you can do geographical queries and fetch only the relevant records.
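For the small-and-static case, instance-memory caching is just a module-level global, along these lines (a minimal sketch; the TTL is illustrative):

    import time

    _cache = None     # cached list of StoreRecords entities
    _cache_time = 0   # when the cache was last filled
    _CACHE_TTL = 600  # refresh at most every 10 minutes (illustrative)

    def get_store_records():
        # Module-level globals survive across requests on the same
        # instance, so most requests never touch the datastore at all.
        global _cache, _cache_time
        if _cache is None or time.time() - _cache_time > _CACHE_TTL:
            _cache = model.StoreRecords.all().fetch(1000)
            _cache_time = time.time()
        return _cache

Unlike memcache, this costs no RPCs at all once warm; the trade-off is that each instance refreshes independently and can serve data that is up to the TTL stale.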

Fetching 1000 records always costs the same amount of API CPU time because that figure is how datastore access costs are represented - it isn't actual elapsed time. The new billing model fixes this by breaking datastore access out into separate billable operations.
