
Building a scalable server

So I'm developing a server application that has to store hundreds of thousands of classes (up to a few million in some cases), serialize them to an SQL database, and load them back several times, and it appears that storing that many class objects in a List is what's throwing an out-of-memory exception, I think.

So that brings up the questions:

  • How can I avoid such errors while still handling all of my million or so classes?
  • Are there other problems that can come from having this much data?
  • What other things can I do to make sure my server is fully scalable and can ultimately handle and manage as much data as possible?

The point of this question being: I will need this many classes all running in memory, as I will need to be continually updating them in a way that would take longer than I'd like to serialize to an SQL database. Right now, I'm using even less memory than I'd ultimately need!


You probably mean objects, not classes ;-)

A scalable processing architecture usually involves the following:

At any point in time, have only a limited number of objects in memory (could be one, could be ten, could be a hundred, but if it has to be "however many I'll need" then you must rethink your design). This ensures that you never run out of memory, because the maximum memory usage is fixed.

All objects are stored in a database. When you need an object that's not in memory, load it from the database. Don't keep it around unless it's part of the previously mentioned short list of objects.

To take advantage of memory not used by your short list, insert a caching layer between your code and the database, so that if you end up fetching the same object a lot, the cost for doing so will be reduced. The cache strategy means your software will only trade memory for speed if there's memory available.
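As a sketch of such a caching layer, here is a minimal fixed-capacity LRU cache in Python; the `load_from_db` loader and the capacity are illustrative stand-ins for whatever database fetch your application actually uses:

```python
from collections import OrderedDict

class LruCache:
    """Fixed-capacity cache: evicts the least recently used entry."""
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader      # fallback: fetch from the database
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as recently used
            return self.entries[key]
        value = self.loader(key)              # cache miss: hit the database
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        return value

# Usage: wrap a (hypothetical) database fetch and count the real calls.
db_calls = []
def load_from_db(key):
    db_calls.append(key)
    return {"id": key}

cache = LruCache(capacity=2, loader=load_from_db)
cache.get(1); cache.get(1)   # second call is served from the cache
cache.get(2); cache.get(3)   # capacity 2: key 1 is evicted here
cache.get(1)                 # miss again -> one more database call
```

Because the capacity is fixed, memory use stays bounded no matter how many distinct objects pass through the cache.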

Try to work using small transactions that read some things, write some things back, then start again. This helps your software resume from where it left off, should a crash or outage happen while it's processing. The database should be enough to start over again from where it left off.
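The batch-and-commit loop might look like this sketch (sqlite3 and the `items` schema are illustrative; a `done` flag records progress, so after a crash the loop resumes at the first row that was never committed):

```python
import sqlite3

# Illustrative schema: items to process, with a 'done' flag marking progress.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value INTEGER, done INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO items (value) VALUES (?)", [(n,) for n in range(10)])
conn.commit()

BATCH = 4
while True:
    # Read a small batch of unprocessed rows.
    rows = conn.execute(
        "SELECT id, value FROM items WHERE done = 0 LIMIT ?", (BATCH,)).fetchall()
    if not rows:
        break
    for row_id, value in rows:
        conn.execute("UPDATE items SET value = ?, done = 1 WHERE id = ?",
                     (value * 2, row_id))
    conn.commit()  # after a crash, work resumes at the first row with done = 0

total = conn.execute("SELECT SUM(value) FROM items").fetchone()[0]
```

Only one batch of rows is in memory at a time, and the commit after each batch is what makes the job restartable.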

By working with independent transactions, it's possible to have multiple worker processes (either on the same computer, or on a computing grid) working on the same database. If you can, implementing a transactional worker-based model is great for performance, and makes it much easier to just throw more computers at the problem.
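One common way to implement this is claim-based work distribution: each worker atomically marks a row as its own before processing it. Below is a single-process Python sketch against an illustrative `jobs` table; real workers would be separate processes or machines sharing one database:

```python
import sqlite3

# Illustrative jobs table; each worker claims one pending row at a time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, "
             "status TEXT DEFAULT 'pending', owner TEXT)")
conn.executemany("INSERT INTO jobs (id) VALUES (?)", [(i,) for i in range(6)])
conn.commit()

def claim_job(worker):
    """Claim one pending job for `worker`; returns its id, or None if none left."""
    row = conn.execute("SELECT id FROM jobs WHERE status = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None
    # The `AND status = 'pending'` guard makes the claim safe under concurrency:
    # if another worker claimed this row first, rowcount is 0 and we claim nothing.
    cur = conn.execute(
        "UPDATE jobs SET status = 'claimed', owner = ? WHERE id = ? AND status = 'pending'",
        (worker, row[0]))
    conn.commit()
    return row[0] if cur.rowcount else None

# Demo: two "workers" draining the same table.
claimed = {"a": [], "b": []}
for worker in ("a", "b") * 4:          # 8 attempts, only 6 jobs
    job = claim_job(worker)
    if job is not None:
        claimed[worker].append(job)
```

Because each claim is its own transaction, adding more workers requires no coordination beyond the shared database.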


Firstly, the obvious: Make sure you have enough RAM. Analyze your code to find out (approximately) how many objects you will have in memory at the same time and then use a memory profiler. See this related question: How much memory does a C#/.NET object use?

Secondly, if you really need millions of objects, it might make sense to rethink your design. In many cases, something simple like a large, multi-dimensional array might be more efficient (and more predictable memory-wise) than a complex tree of .NET classes. Whether this advice applies to your case or not, I cannot say with the data at hand.
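The effect is easy to measure. This Python sketch (the principle carries over to .NET) compares the rough footprint of 100,000 small objects against the same data held in two flat arrays of machine-sized integers:

```python
import sys
from array import array

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

n = 100_000
points = [Point(i, i) for i in range(n)]

# Rough per-object cost: list slots + object headers + per-instance __dict__.
object_bytes = (sys.getsizeof(points)
                + sum(sys.getsizeof(p) + sys.getsizeof(p.__dict__) for p in points))

# The same data as two flat arrays of 8-byte signed integers.
xs = array("q", range(n))
ys = array("q", range(n))
array_bytes = sys.getsizeof(xs) + sys.getsizeof(ys)

ratio = object_bytes / array_bytes
```

The exact ratio depends on the runtime, but the flat representation is typically several times smaller, and its size is trivially predictable (8 bytes per value).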

Thirdly, if it's not necessary to have all this data in memory at the same time, then don't do it. SQL databases are quite fast nowadays (and use smart caching mechanisms), so it might make sense to have only the objects in your list that you currently need (rather than loading everything into memory). In addition, searching through an SQL database index might even be faster than traversing a huge in-memory list.
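For example, with an index on the lookup column, the database can hand back just the one row you need instead of your code scanning a huge in-memory list (sqlite3 and the `users` schema here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_users_name ON users (name)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("user%d" % i,) for i in range(100_000)])
conn.commit()

# Instead of holding 100,000 objects in a list and scanning it,
# let the database's index find the one row we actually need.
row = conn.execute("SELECT id FROM users WHERE name = ?", ("user54321",)).fetchone()
```

The index lookup is logarithmic in the table size, while a naive scan of an in-memory list is linear, so this approach scales better as the data grows.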


It may be worth caching some of your frequently read database data in something like Memcached: http://en.wikipedia.org/wiki/Memcached
