Design decision? 4-minute report... tying up the server

I have a few reports in my RoR application that take up to 4 minutes to run. While a report is running, my users experience delays with their simple tasks and the system stops responding.

I am still trying to track down the main cause of the issue to see if the reporting and the system hangs are related. Does anyone have any advice on how I should go about restructuring my report so it does not take 4 minutes to run?

So far I have looked into running a background job every night that calculates all of the figures I want, so that reporting times are significantly decreased. I have yet to find a solution that uses middleware or in-memory instances to derive reports instead of hitting the DB with a large query.
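
As a rough sketch of that nightly approach (ReportSnapshot and the table/column names below are hypothetical, not part of my current schema), the job would store the aggregates so the report action only reads precomputed rows:

    # Nightly precomputation sketch: store aggregates so the report action only
    # reads ReportSnapshot rows (model, table and column names are hypothetical).
    # Could be scheduled via cron, whenever, or a background-job library.
    class NightlyReportJob
      def self.run
        totals = ActiveRecord::Base.connection.select_all(<<-SQL)
          SELECT account_id, SUM(amount) AS total, COUNT(*) AS line_count
          FROM invoice_lines
          GROUP BY account_id
        SQL

        totals.each do |row|
          snapshot = ReportSnapshot.find_or_initialize_by(account_id: row["account_id"])
          snapshot.update!(total: row["total"],
                           line_count: row["line_count"],
                           computed_at: Time.current)
        end
      end
    end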

Technical info:

  • The VPS has 384 MB of RAM (would an increase be a sufficient work-around?)
  • The reports are a combination of SQL queries with some heavy Ruby scripting to massage the contents so they are displayed in a readable fashion
  • DB tables contain between 2,000 and 30,000 rows

Technical Report Info

  1. DB-query-1 has 5 inner joins. It is run first to create a lookup hash
  2. DB-query-2 has 5 inner joins. It is run once and each record of its result set is traversed
  3. For each record of (2.), DB-query-3 (containing 5 inner joins) is run
  4. For each record of (2.), the lookup hash from (1.) is traversed to perform some calculations

Just from writing this out at Anon's request for additional information, I can already see the sting of a nested loop...
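
Roughly, the flow looks like the sketch below (Order/OrderLine are just stand-in names for my actual models); the change I am considering is running DB-query-3 once for all parent ids instead of once per record:

    # Current shape: DB-query-3 runs once per record of DB-query-2
    orders = Order.joins(:customer, :region).to_a                  # stands in for DB-query-2
    orders.each do |order|
      lines = OrderLine.joins(:product).where(order_id: order.id)  # DB-query-3, run N times
      # ...heavy Ruby massaging of `lines` for display...
    end

    # Candidate fix: one query for all parents, grouped in memory
    lines_by_order = OrderLine.joins(:product)
                              .where(order_id: orders.map(&:id))
                              .group_by(&:order_id)
    orders.each do |order|
      lines = lines_by_order[order.id] || []
      # ...same massaging, but no per-record query...
    end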


30,000 records is nothing. You should be able to perform a full dump export/import of a database that size in less than 4 seconds, so something is not right :)

If you can provide us with some information about the tables, their relationships and what your report is about, I'm sure we can offer some specific advice on how to improve things.

Some general advice on performant reports: don't use an ORM tool for reports unless the requirements are very trivial (in which case it would be called a list, not a report). Go for hand-crafted SQL with attitude.
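
For instance (table and column names below are placeholders), a Rails report can skip ActiveRecord object instantiation entirely and read the result set as plain hashes:

    # Hand-crafted SQL for a report, read straight into hashes instead of
    # ActiveRecord objects (table and column names are placeholders).
    sql = <<-SQL
      SELECT c.name,
             COUNT(o.id)  AS order_count,
             SUM(o.total) AS revenue
      FROM customers c
      JOIN orders o ON o.customer_id = c.id
      GROUP BY c.name
      ORDER BY revenue DESC
    SQL

    rows = ActiveRecord::Base.connection.select_all(sql)
    rows.each do |row|
      puts "#{row['name']}: #{row['order_count']} orders, #{row['revenue']}"
    end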

Don't listen to the zealots who forbid you to use vendor-specific SQL extensions. You paid (in money or time spent) for this specific DBMS. Use it to its fullest!

Do as much work as humanly possible inside the database (when using the extensions, this becomes much easier). Databases are built for searching, sorting and aggregating data, and they do it very well. A table with 1,000,000 rows at an average row size of 200 bytes will consume less than 200 MB. Assuming the disk subsystem can deliver 50 MB/s, it would take 4 seconds to perform a SUM(value) over the entire table. That's the theoretical upper limit. If you need more, you have to pre-compute data so that the working set becomes smaller (google "aggregate" and "rollup").
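
One way to pre-compute a smaller working set (sketched here with a hypothetical daily_sales_summary table) is to rebuild a summary table with a single set-based statement and point the report at it instead of the raw rows:

    # Rebuild a summary table in one aggregate statement (hypothetical schema;
    # date-truncation syntax varies between DBMS products). The report then
    # queries daily_sales_summary instead of the raw sales table.
    ActiveRecord::Base.connection.execute("DELETE FROM daily_sales_summary")
    ActiveRecord::Base.connection.execute(<<-SQL)
      INSERT INTO daily_sales_summary (sale_date, product_id, units, revenue)
      SELECT DATE(created_at), product_id, SUM(quantity), SUM(amount)
      FROM sales
      GROUP BY DATE(created_at), product_id
    SQL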

This potentially means writing longer and more complicated queries. Long, complex SQL often confuses the optimizer so that it comes up with a suboptimal execution plan. Solving this usually requires solid experience with the specific DBMS product. Having a normalized database with full referential integrity and constraint checking makes this a less common issue though.

Don't run queries in a loop. If a query takes 1 ms to complete, running 1,000 of them takes far more than 1 second: each call also has to "pay" for network latency, marshalling/unmarshalling the data, potential locking issues, parsing, fetching, etc.
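
As a sketch (accounts is a placeholder table; in practice the ids would be gathered earlier in the report), one IN query replaces a thousand single-row round trips:

    # One round trip instead of one per id (placeholder table and ids).
    ids = [101, 102, 103].map(&:to_i)   # ids gathered earlier in the report
    in_list = ids.join(", ")            # safe to interpolate: integers only
    rows = ActiveRecord::Base.connection.select_all(
      "SELECT id, name, total FROM accounts WHERE id IN (#{in_list})"
    )
    rows.each { |row| puts "#{row['name']}: #{row['total']}" }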

(updated)

If the expected (and acceptable) runtime is more than a second or so, you can afford to look up some values beforehand and provide them as literals to help the optimizer. This can be used to remove non-correlated subqueries and joins.
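
For example (hypothetical tables), a cut-off date can be fetched in its own cheap query and inlined as a literal, instead of leaving a non-correlated subquery inside the big report SQL:

    # Look up the value once, then hand it to the big query as a literal
    # (hypothetical tables; quote() guards the interpolated value).
    period_start = ActiveRecord::Base.connection.select_value(
      "SELECT MAX(closed_at) FROM accounting_periods"
    )

    report_sql = <<-SQL
      SELECT product_id, SUM(amount) AS revenue
      FROM sales
      WHERE created_at > #{ActiveRecord::Base.connection.quote(period_start)}
      GROUP BY product_id
    SQL
    rows = ActiveRecord::Base.connection.select_all(report_sql)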

Keep user-defined functions to a minimum. They confuse the optimizer and often force row-by-row processing (which does not scale).
