
Efficiently detecting concurrent insertions using standard SQL

The Requirements

I have the following table (pseudo-DDL):

CREATE TABLE MESSAGE (
    MESSAGE_GUID GUID PRIMARY KEY,
    INSERT_TIME DATETIME
)

CREATE INDEX MESSAGE_IE1 ON MESSAGE (INSERT_TIME);

Several clients concurrently insert rows in that table, possibly many times per second. I need to design a "Monitor" application that will:

  1. Initially, fetch all the rows currently in the table.
  2. After that, periodically check if there are any new rows inserted and then fetch these rows only.

There may be multiple Monitors concurrently running. All the Monitors need to see all the rows (i.e. when a row is inserted, it must be "detected" by all the currently running Monitors).

This application will be developed for Oracle initially, but we need to keep it portable to every major RDBMS and would like to avoid as much database-specific stuff as possible.

The Problem

The naive solution would be to simply find the maximal INSERT_TIME in rows selected in step 1 and then...

SELECT * FROM MESSAGE WHERE INSERT_TIME >= :max_insert_time_from_previous_select

...in step 2.

However, I'm worried this might lead to race conditions. Consider the following scenario:

  1. Transaction A inserts a new row but does not yet commit.
  2. Transaction B inserts a new row and commits.
  3. The Monitor selects rows and sees that the maximal INSERT_TIME is the one inserted by B.
  4. Transaction A commits. At this point, A's INSERT_TIME is actually earlier than B's (A's INSERT was executed before B's, before it was known which transaction would commit first).
  5. The Monitor selects rows newer than B's INSERT_TIME (as a consequence of step 3). Since A's INSERT_TIME is earlier than B's, A's row is skipped.

So, the row inserted by transaction A is never fetched.

Any ideas how to design the client SQL or even change the database schema (as long as it is mildly portable), so these kinds of concurrency problems are avoided, while still keeping a decent performance?

Thanks.


Without using any of the platform-specific change data capture (CDC) technologies, there are a couple of approaches.

Option 1

Each Monitor registers a sort of subscription to the MESSAGE table. The code that writes messages then writes each MESSAGE once per Monitor, i.e.

CREATE TABLE message_subscription (
  message_subscription_id NUMBER PRIMARY KEY,
  message_id RAW(32) NOT NULL,
  monitor_id NUMBER NOT NULL,
  CONSTRAINT uk_message_sub UNIQUE (message_id, monitor_id)
);

INSERT INTO message_subscription
  SELECT message_subscription_seq.nextval,
         :message_guid,  -- GUID of the MESSAGE row just inserted
         monitor_id
    FROM monitor_subscribers;

Each Monitor then deletes the message from its subscription once it has been processed.
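A sketch of this scheme using SQLite for illustration (table and column names follow the answer; the `publish`/`consume` helpers and monitor registration are assumptions):

```python
import sqlite3

# Option 1 sketch: the writer fans each message out to one subscription row
# per registered Monitor; each Monitor consumes (reads, then deletes) its rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE monitor_subscribers (monitor_id INTEGER PRIMARY KEY);
CREATE TABLE message_subscription (
  message_id TEXT NOT NULL,
  monitor_id INTEGER NOT NULL,
  UNIQUE (message_id, monitor_id)
);
INSERT INTO monitor_subscribers VALUES (1), (2);
""")

def publish(conn, message_id):
    # One subscription row per registered Monitor, written with the message.
    conn.execute(
        "INSERT INTO message_subscription "
        "SELECT ?, monitor_id FROM monitor_subscribers",
        (message_id,),
    )
    conn.commit()

def consume(conn, monitor_id):
    # Read this Monitor's pending messages, then delete them, in one transaction.
    with conn:
        rows = conn.execute(
            "SELECT message_id FROM message_subscription WHERE monitor_id = ?",
            (monitor_id,),
        ).fetchall()
        conn.execute(
            "DELETE FROM message_subscription WHERE monitor_id = ?", (monitor_id,)
        )
    return [r[0] for r in rows]

publish(conn, "msg-1")
```

Because each Monitor owns its own subscription rows, a late-committing writer simply means the subscription row appears late too; there is no timestamp comparison to race against.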

Option 2

Each Monitor maintains a cache of the recent messages it has processed that is at least as long as the longest-running transaction could be. If the Monitor maintained a cache of the messages it has processed for the last 5 minutes, for example, it would query your MESSAGE table for all messages later than its LAST_MONITOR_TIME. The Monitor would then be responsible for noting that some of the rows it had selected had already been processed. The Monitor would only process MESSAGE_ID values that were not in its cache.
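The cache idea can be sketched as follows (SQLite for illustration; the `Monitor` class, the 5-minute `WINDOW`, and all names other than MESSAGE are assumptions):

```python
import sqlite3
from datetime import datetime, timedelta

# Option 2 sketch: re-query with a window of overlap and deduplicate against
# a cache of already-processed message IDs. WINDOW must be at least as long
# as the longest-running writer transaction.
WINDOW = timedelta(minutes=5)

class Monitor:
    def __init__(self):
        self.seen = {}  # MESSAGE_GUID -> INSERT_TIME, the dedup cache

    def poll(self, conn, now):
        # Query back past the last poll by WINDOW, so a row committed late
        # (but timestamped early) is still selected on a later poll.
        since = (now - WINDOW).strftime("%Y-%m-%d %H:%M:%S")
        rows = conn.execute(
            "SELECT MESSAGE_GUID, INSERT_TIME FROM MESSAGE WHERE INSERT_TIME >= ?",
            (since,),
        ).fetchall()
        fresh = [g for g, _ in rows if g not in self.seen]
        self.seen.update(rows)
        # Prune cache entries that have aged out of the window to bound memory.
        self.seen = {g: t for g, t in self.seen.items() if t >= since}
        return fresh

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MESSAGE (MESSAGE_GUID TEXT PRIMARY KEY, INSERT_TIME TEXT)")
```

The cost of this design is that every poll re-reads up to a window's worth of rows; the benefit is that it needs no schema change at all.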

Option 3

Just like Option 1, you set up subscriptions for each Monitor, but you use some queuing technology to deliver the messages to the Monitor. This is less portable than the other two options, but most databases can deliver messages to applications via queues of some sort (e.g. JMS queues if your Monitor is a Java application). This saves you from reinventing the wheel by building your own queue table and gives you a standard interface in the application tier to code against.


You need to be able to identify all rows added since the last time the Monitor checked. You have a "time of insert" column, but as you spell out, that column cannot be used with "greater than [last check]" logic to reliably identify newly inserted rows, because commits do not occur in the same order as the (initial) inserts. I am not aware of anything that works on all major RDBMSs that would clearly and safely apply such an "as of" tag at the actual time of commit (though I may simply be unaware of one). Thus, you will have to use something other than a "greater than last check" algorithm.

That leads to filtering. Upon insert, an item (row) is flagged as "not yet checked"; when a Monitor polls, it reads all not-yet-checked items, returns that set, and flips the flag to "checked" (and if there are multiple Monitors, this must all be done within its own transaction). The Monitors' queries will have to read all the data and pick out which rows have not yet been checked. The implication, however, is that this will be a fairly small set of data relative to the entire table. From here, I see two likely options:

  • Add a column, perhaps CHECKED, storing a binary 1/0 value for is/is-not checked. The distribution of this value will be extremely skewed -- 99.9% checked, 0.1% unchecked -- so it should be rather efficient. (Some RDBMSs provide filtered/partial indexes, such that the checked rows won't even be in the index; once flipped to checked, a row will presumably never be flipped back, so the overhead to support this shouldn't be too great.)
  • Add a separate table identifying those rows in the "primary" table that have not yet been checked. When a Monitor polls, it reads and deletes the items from that table. This doesn't seem efficient... but again, if the data set involved is small, the overall performance cost might be acceptable.
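The first option can be sketched like this for a single Monitor (SQLite for illustration; the CHECKED column name, the partial index, and `check_new()` are assumptions -- SQLite happens to support the filtered/partial index mentioned above via a WHERE clause):

```python
import sqlite3

# CHECKED-flag sketch: unchecked rows are read and flipped to checked
# inside one transaction, so each row is handed out exactly once.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MESSAGE (
  MESSAGE_GUID TEXT PRIMARY KEY,
  INSERT_TIME TEXT,
  CHECKED INTEGER NOT NULL DEFAULT 0
);
-- partial index: only the (rare) unchecked rows appear in it
CREATE INDEX message_unchecked ON MESSAGE (INSERT_TIME) WHERE CHECKED = 0;
""")

def check_new(conn):
    # Read and flip atomically; "with conn" commits on exit.
    with conn:
        rows = conn.execute(
            "SELECT MESSAGE_GUID FROM MESSAGE WHERE CHECKED = 0"
        ).fetchall()
        conn.execute("UPDATE MESSAGE SET CHECKED = 1 WHERE CHECKED = 0")
    return [r[0] for r in rows]
```

With several Monitors a single flag is not enough -- one Monitor's flip would hide the row from the others -- which is where the separate per-monitor table from the second bullet comes in.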


You should use Oracle AQ with a multi-subscriber queue.

This is Oracle specific, but you can create an abstraction layer of stored procedures (or abstract in Java if you like) so that you have a common API to enqueue the new messages and have each subscriber (monitor) dequeue any pending messages. Behind that API, for Oracle you use AQ.

I am not sure if there is a queuing solution for other databases.

I don't think you will be able to come up with a totally database-agnostic approach that meets your requirements. You could extend the earlier example with the 'checked' column to use a second table, monitor_checked, containing one row per message per monitor. That is basically what AQ does behind the scenes, so it is sort of reinventing the wheel.


With PostgreSQL, use PgQ. It has all those little details worked out for you.

I doubt you will find a robust and manageable database-agnostic solution for this.
