
Should I be using a queue for this...?

I am working on an application which is dealing with 'sensitive' data (aka Credit Card numbers), and in order to attain PCI Compliance we need to ensure that our database is separate from our public servers.

In order to have the data come in and be stored, there needs to be something in the middle - I don't want the data being written or read directly from the web/app servers - so I was wondering if a queue/worker architecture might be suitable.

The basic flow would be:

  1. Data from Client -> Send to API
  2. API Server puts data into 'request' object -> Enqueue
  3. Worker (on 'internal' network) picks up request, writes to DB, does work, updates DB, then enqueues a 'response' object
  4. API Server receives this response object then sends back to response to client

Essentially I am hoping for something where the data will come back to the same 'request' so the whole process can be done in a single request, but this seems to go against the asynchronous nature of message queues, and might be better suited to a 'web service' acting as a strict 'protocol', per se...
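To illustrate what I mean, here is a rough sketch of the request/reply pattern I have in mind, using a correlation ID so the API server can block until its particular response comes back. The in-process `queue.Queue`, the queue names, and the handler are just stand-ins for a real broker, not any specific product's API:

```python
import queue
import threading
import uuid

request_q = queue.Queue()    # public API server -> internal worker
response_q = queue.Queue()   # internal worker -> public API server

def worker():
    """Internal-network worker: dequeue, write to the DB, do the work, reply."""
    while True:
        msg = request_q.get()
        # ... write to DB, process the card, update the DB ...
        response_q.put({"correlation_id": msg["correlation_id"], "status": "ok"})
        request_q.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_api_call(card_number: str, timeout: float = 5.0) -> dict:
    """Public API server: enqueue a request, then block until the matching reply arrives."""
    correlation_id = str(uuid.uuid4())
    request_q.put({"correlation_id": correlation_id, "card_number": card_number})
    while True:
        reply = response_q.get(timeout=timeout)
        if reply["correlation_id"] == correlation_id:
            return reply
        response_q.put(reply)  # not our reply; put it back for whoever is waiting on it

print(handle_api_call("4111111111111111"))
```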

Edit: I should probably add that I require the following:

  • Durability - if the queue or whatever 'crashes' it should be able to recover 'queued' items
  • Security - the sensitive data needs to be protected. Transport is fine, since we can use something at the transport layer (TLS, SSL, IPsec), but storing card numbers on the sender side (public network) is not ideal...
  • Speed - of course.

So, am I going about this the wrong way?


While I cannot say that you should, it certainly sounds like you could use one to good effect.

A queue will give you a level of isolation between components. If that is a necessary physical requirement to pass your certification, you can then show that the two machines are connected through a specific network, and only particular ports are open, etc.

Durability is a common feature of queue software, as is transport-level security.
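For example, with RabbitMQ (just one of many brokers, chosen here purely as an assumption) durability is a flag on the queue and on each message; the host and queue name below are illustrative:

```python
import pika

# Connect to the broker (host name is illustrative).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="internal-broker"))
channel = connection.channel()

# durable=True makes the queue definition itself survive a broker restart.
channel.queue_declare(queue="card.requests", durable=True)

# delivery_mode=2 marks the message as persistent, so the broker writes it
# to disk rather than keeping it only in memory.
channel.basic_publish(
    exchange="",
    routing_key="card.requests",
    body=b"<encrypted request payload>",
    properties=pika.BasicProperties(delivery_mode=2),
)

connection.close()
```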

Speed is a fuzzier concern. In general, messages in a queue system, whether commercial or open source (leaving aside roll-your-own), take mere milliseconds to transmit even with durability and the like; add a bit of extra overhead for encryption. Assuming a sensible granularity for your messages (i.e. they are not "too small" and the protocol is not "too chatty"), you should do just fine.

There are many commercial and open-source message queue systems; Google is your friend for finding them.

One left-field alternative would be to use a modern REST-like architecture. One of the best fleshed-out examples is DayTrader.

Good Luck


I might be misunderstanding the question, but to boil it down, I think you might be overthinking the issue. There are ways to expose a web application server to the internet while keeping the database safe behind the firewall. Using some sort of SOA can increase that isolation, and hopefully reduce the chance of a SQL injection attack, but it isn't automatic.

Introducing a queue can help: it can be configured to be durable, which gives you the recovery you want, and some can be configured to be synchronous, which would meet your "one step" (or at least pseudo-one-step) requirement. But be aware that durability adds another security exposure: the information in the queue is written somewhere, usually to a database or the file system, until the transaction is committed. So in the crash scenario you describe, yes, it is recoverable, but it could also be viewed by someone inside the enterprise with access to wherever that information is temporarily stored.
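If that at-rest exposure is a concern, one common mitigation is to encrypt the sensitive fields before they are enqueued, so whatever the broker persists is ciphertext. A minimal sketch, assuming the Python `cryptography` package and leaving real key management (KMS/HSM, rotation, etc.) aside:

```python
from cryptography.fernet import Fernet

# In practice the key would live only on the internal network (worker side),
# never alongside the public-facing queue storage.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_card_number(card_number: str) -> bytes:
    """Public side: encrypt the sensitive field before enqueueing it."""
    return cipher.encrypt(card_number.encode())

def decrypt_card_number(token: bytes) -> str:
    """Internal worker side: decrypt after dequeueing."""
    return cipher.decrypt(token).decode()

token = encrypt_card_number("4111111111111111")
assert decrypt_card_number(token) == "4111111111111111"
```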
