
How to optimize this "Time apart" SQL query?

I would like an easy way to run queries like the following on my time-series data:

"What are the most typical events to take place within seven days of each other?"

I could do this with SQL plus a Java program, looking at each row and running a query that looks up all events seven days earlier or later, but that is not very elegant and performance would be horrible.

I also got help from JNK and Milen A. Radev to write the following SQL. My problem is that when I tested it on 23 million rows, it ran for two hours and then stopped because my RAM disk (where the PostgreSQL database lives) was full. Do you have any idea how I could optimize such a query?

SELECT a.eventID, b.eventID, COUNT(*)
FROM table a
INNER JOIN table b
    ON a.eventID <> b.eventID
WHERE ABS(EXTRACT(EPOCH FROM (a.thetimeanddate - b.thetimeanddate))) < 5 * 86400  -- EXTRACT(EPOCH ...) returns seconds; 5 days
GROUP BY a.eventID, b.eventID 
ORDER BY COUNT(*) DESC
LIMIT 1000;


Part of the problem is that certain functions prevent the RDBMS from inferring properties of your query that would let it use an index. (Look up "sargable" for more on this.)

This means the RDBMS has to process every combination of events and check the WHERE clause to see whether they fall within 5 days of each other. With 23 million rows, that is 529,000,000,000,000 combinations. (529 million million is quite a lot.)
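The scale is easy to verify (a sketch; the `<>` join excludes only the 23 million self-pairs, which is negligible at this scale):

```sql
-- 23 million rows self-joined, excluding the a.eventID = b.eventID pairs:
SELECT 23000000::bigint * 23000000 - 23000000 AS pairs_to_check;
-- on the order of 5.29 * 10^14 rows for the WHERE clause to filter
```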

If you rephrase the query to say "WHERE b.thetimeanddate has these properties", then you may see a performance boost, provided you have an index covering the [thetimeanddate] field. For example...

SELECT
  a.eventID,
  b.eventID,
  COUNT(*)
FROM
  table a
INNER JOIN
  table b
    ON a.eventID <> b.eventID
WHERE
      b.thetimeanddate >= date_trunc('day', a.thetimeanddate) - INTERVAL '5 days'
  AND b.thetimeanddate <  date_trunc('day', a.thetimeanddate) + INTERVAL '6 days'
GROUP BY
  a.eventID,
  b.eventID
ORDER BY
  COUNT(*) DESC
LIMIT
  1000
;

The RDBMS should now be able to make much easier use of any index on the table that covers the [thetimeanddate] field. It simply works out the truncated date for each of the 23 million events, and checks the index to see how many rows fall between "this date" and "that date". Quite possibly a few million times quicker than the alternative...
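If such an index does not already exist, creating one is a one-liner (the index name is a placeholder; `table` is the placeholder table name from the question):

```sql
-- A plain B-tree index lets the range predicate on b.thetimeanddate
-- seek directly to the five-day window instead of scanning the whole table:
CREATE INDEX idx_table_thetimeanddate ON table (thetimeanddate);
```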

(I would also be tempted to move the WHERE clause into the ON clause, but only for style purposes. The performance will be identical. Remember, the RDBMS compiles these queries and chooses algorithms and optimisations. If two queries can be algebraically manipulated into being the same, they will normally yield the same final execution plan. [Assuming all the information needed for the manipulation exists in the query, and is not 'just known in your head'.])
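For reference, the ON-clause version would look like this (same semantics, and the optimiser should produce the same plan):

```sql
SELECT a.eventID, b.eventID, COUNT(*)
FROM table a
INNER JOIN table b
    ON  a.eventID <> b.eventID
    AND b.thetimeanddate >= date_trunc('day', a.thetimeanddate) - INTERVAL '5 days'
    AND b.thetimeanddate <  date_trunc('day', a.thetimeanddate) + INTERVAL '6 days'
GROUP BY a.eventID, b.eventID
ORDER BY COUNT(*) DESC
LIMIT 1000;
```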

EDIT

I've also noticed that you're grouping by both a.eventID and b.eventID, then doing a count. Assuming eventID is unique in the table, this would always yield a count of 1...

EDIT

Changed +5 to + INTERVAL '5 days'

