
Distributed datastore

We're trying to add some kind of persistence to our app. The app generates about 250 entries per second, and each entry belongs to one of 2M files. For each file, we want to keep the last 10 entries so we can look them up later.

The way our client application works:

  • it gets a stream of all the data
  • it fetches the right file (GET)
  • it adds the new content
  • it saves the file back (PUT)
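The steps above amount to a read-modify-write loop that caps each file at its 10 most recent entries. A minimal sketch, with a plain dict standing in for the remote store's GET/PUT interface (names here are illustrative, not part of any real client library):

```python
from collections import defaultdict

MAX_ENTRIES = 10  # keep only the last 10 entries per file

# A plain dict stands in for the remote key-value store.
store = defaultdict(list)

def append_entry(file_id, entry):
    """GET the file, add the new content, trim, and PUT it back."""
    entries = list(store[file_id])            # GET
    entries.append(entry)                     # add the new content
    store[file_id] = entries[-MAX_ENTRIES:]   # PUT, keeping the last 10

for i in range(15):
    append_entry("file-1", i)

print(store["file-1"])  # the 10 most recent entries: [5, 6, ..., 14]
```

Note that doing this as a GET followed by a PUT means every entry costs two requests, which matters for any store that bills per operation.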

We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks.

We initially looked at S3. It works fine, but it becomes very expensive very fast (more than $1000 a month just in PUT operations!).
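For scale: at 250 entries per second, that's roughly 650M PUT requests a month, so even a fraction of a cent per thousand requests adds up. A quick back-of-the-envelope calculation (the per-request price below is an assumption for illustration; check current S3 pricing):

```python
entries_per_sec = 250
seconds_per_month = 30 * 24 * 3600                    # ~2.6M seconds
puts_per_month = entries_per_sec * seconds_per_month  # 648,000,000 PUTs

# Assumed price per 1000 PUT requests, for illustration only.
price_per_1000_puts = 0.01
monthly_cost = puts_per_month / 1000 * price_per_1000_puts

print(puts_per_month)  # 648000000
print(monthly_cost)    # 6480.0
```

And that's before counting the GET that precedes every PUT in the read-modify-write loop.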

We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is far too slow.

Any other solution out there?


There are lots of knobs you can turn in Riak - ask the mailing list if you haven't already and we'll figure out a sane configuration for you. 60 writes/sec is not within the norm.

See: http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


What about Hadoop's HDFS spread over Amazon EC2 instances? I know each instance has a good amount of storage space, and you don't have to pay for put/get, only the inbound transfer.


I would suggest looking at CloudIQ Storage from Appistry. It's a fully distributed file store, accessible via a REST-based API, and it can run on commodity hardware. You can define the number of copies retained on a file-by-file basis. It supports an eventually consistent model, so you can balance file consistency with performance.
