Java network programming: coordination of messaging

I have 2 processes running on different machines which are communicating via TCP sockets.

Both processes have code that acts both as a server and as a client.

I.e. ProcessA has opened a server socket bound to portX, and ProcessB has opened a server socket bound to portY.

ProcessA opens a client socket to connect to ProcessB and starts sending messages as a client and receiving responses (over the same TCP connection, of course).

Once ProcessB receives a message and processes it, it sends the response, but it could also send a message over the second TCP connection, i.e. the one where ProcessB has opened a client socket to portX of ProcessA.

So the messages flow over 2 different TCP connections.

My problem is the following: taking for granted that this "architecture" cannot change and must stay as is:

Intermittently, the messages sent from ProcessB to ProcessA over the TCP connection on which ProcessB opened the client socket arrive at ProcessA before the responses sent from ProcessB to ProcessA over the TCP connection on which ProcessA connected as a client.

I.e. Both flows occur

(1)  
ProcessA ---->(msg)----> ProcessB(PortY)  (TCP1)
ProcessB does processing   
ProcessB(portY)--->(response)----->ProcessA (TCP1)  
ProcessB--->(msg)----->ProcessA(portX)  (TCP2)

(2)  
ProcessA ---->(msg)----> ProcessB(PortY)  (TCP1)
ProcessB does processing   
ProcessB--->(msg)----->ProcessA(portX)  (TCP2)
ProcessB(portY)--->(response)----->ProcessA  (TCP1)

EDIT (after ejp's request): How can I enforce/make sure that ProcessB does not send a msg over the connection where ProcessB has a client socket open to server portX of ProcessA, before the message sent as a reply from server portY of ProcessB arrives at ProcessA? I.e. to have only flow (1) of the above.

Note that processB is multithreaded and the processing is non-trivial.

UPDATE: Maybe it is my misconception, but when a process sends data over a socket and control returns to the application, this does not mean that the receiving side has received the data. So if a process sends data over 2 sockets, is there a race condition in the OS?

UPDATE2

After the answer I got from Vijay Mathew:

If I did the locking as suggested, is there a guarantee that the OS (i.e. the IP layer) will send the data in order? I.e. finish one transmission, then send the next? Or would they be multiplexed and have the same issue?

Thanks


The obvious solution is:

LockObject lock;

ProcessA ---->(msg)----> ProcessB(PortY)

// Processing the request and sending its response 
// makes a single transaction.
synchronized (lock) {
    ProcessB does processing   
    ProcessB(portY)--->(response)----->ProcessA (TCP1)
}

// While the processing code holds the lock, B is not
// allowed to send a request to A.
synchronized (lock) {
    ProcessB--->(msg)----->ProcessA(portX)  (TCP2)
}
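In actual Java, the idea above might look like the following minimal sketch (the class and stream names are my own, not from the question). Note the caveat raised in UPDATE2: the lock only serializes the writes inside ProcessB; by itself it does not control in which order the two independent TCP connections deliver the data to ProcessA.

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Minimal sketch: both senders in ProcessB share one lock, so the
// response on TCP1 is always written before any follow-up message
// on TCP2. Stream names are illustrative.
class OrderedSender {
    private final Object lock = new Object();
    private final DataOutputStream respOut; // TCP1: B(portY) -> A
    private final DataOutputStream msgOut;  // TCP2: B -> A(portX)

    OrderedSender(DataOutputStream respOut, DataOutputStream msgOut) {
        this.respOut = respOut;
        this.msgOut = msgOut;
    }

    void sendResponse(byte[] response) throws IOException {
        synchronized (lock) {
            respOut.write(response);
            respOut.flush();
        }
    }

    void sendMessage(byte[] msg) throws IOException {
        // Blocks while a response is still being written.
        synchronized (lock) {
            msgOut.write(msg);
            msgOut.flush();
        }
    }
}
```

This guarantees only the local write order; delivery order across two separate TCP connections is still up to the network, which is what the sequence-numbering answer below deals with.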


The synchronisation problem may not be in the TCP protocol, but in the thread handler choosing which thread to wake up when the messages arrive. I understand from the nature of your question that the PortX "(Msg)" is sent very quickly after the PortY "(Response)". This means that the thread handler may occasionally have a choice as to which of the two listening threads it will wake.

A simple, but ugly and incomplete, way to fix the problem is to insert a short sleep between the response and the next message. The sleep would have to be long enough to be confident that the other process will have woken up to the response before the next message is received. This way is incomplete because, although you are increasing the chances of properly synchronising your processing, issues like OS load and network congestion can always conspire to push your message right back up behind your response. And then you're back where you started, just less often. The other bit of ugliness is that the sleeping wastes time and will reduce your maximum throughput. So...

To completely resolve the issue, you need some way for each socket listener to know whether the message it just received is the next one to be processed, or whether there might be earlier messages that have to be processed first. Do this by sequentially numbering all messages sent by each process. Then the receiving process knows if anything is missing.
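That numbering might be framed on the wire like the following sketch, assuming a simple length-prefixed format (the class, method, and field names here are illustrative, not from the question):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of per-process sequence numbering. Every outgoing message
// carries one globally increasing number, whichever socket it leaves
// on, so the receiver can tell when an earlier message is missing.
class SequencedChannel {
    static final class Message {
        final long seq;
        final byte[] payload;
        Message(long seq, byte[] payload) { this.seq = seq; this.payload = payload; }
    }

    private long nextSeq = 0;

    // Assign the number and write the frame under one lock, so
    // sequence numbers leave this process in increasing order.
    synchronized void send(DataOutputStream out, byte[] payload) throws IOException {
        out.writeLong(nextSeq++);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    static Message receive(DataInputStream in) throws IOException {
        long seq = in.readLong();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return new Message(seq, payload);
    }
}
```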

You will have to think of a way for the listeners on each socket to confer between themselves to ensure that messages received are processed in order of transmission. There are a number of practical solutions, but they all amount to the same thing at the abstract, conceptual level.

THREAD 1:
A) The ProcessA(PortX) thread receives a message and wakes.
B) If the sequence number indicates that there is a missing message, then:
   B1) synchronize on ProcessA(PortY) and wait().
   B2) On waking, go back to B).
C) (No message is missing.) Process the message.
D) Back to A).

THREAD 2:
A) ProcessA(PortY) receives a response and wakes.
B) Process the response.
C) notifyAll().
D) Back to A).

The most generic practical solutions would probably involve a single socket listener instance adding all new messages to a PriorityQueue so the earliest-sent messages always go to the head of the queue. Then Thread 1 and Thread 2 could both wait on that instance until a message arrives that they can process.
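A sketch of such a reordering buffer, using a plain PriorityQueue guarded by wait()/notifyAll() (all names here are illustrative, not from the question):

```java
import java.util.PriorityQueue;

// Sketch: both socket listeners feed received messages into one
// shared buffer; take() releases them strictly in sequence order.
class ReorderingBuffer {
    static final class Msg implements Comparable<Msg> {
        final long seq;
        final byte[] payload;
        Msg(long seq, byte[] payload) { this.seq = seq; this.payload = payload; }
        public int compareTo(Msg o) { return Long.compare(seq, o.seq); }
    }

    private final PriorityQueue<Msg> heap = new PriorityQueue<>();
    private long nextExpected = 0;

    synchronized void put(Msg m) {
        heap.add(m);
        notifyAll(); // the missing message may just have arrived
    }

    // Blocks until the message with the next sequence number is here,
    // even if later-numbered messages arrived first.
    synchronized Msg take() throws InterruptedException {
        while (heap.isEmpty() || heap.peek().seq != nextExpected) {
            wait();
        }
        nextExpected++;
        return heap.poll();
    }
}
```

Here the earliest-sent message always sits at the head of the heap, and a consumer that arrives "too early" simply waits until the gap is filled.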

A simpler but less extensible solution would be to have each thread do its own listening and waiting, with the (response) handler notifying after processing.

Good luck with it, although after all this time, it's probably solved already.


The obvious question is why do you care? If you have operations that need to be synchronized at either end, do so. Don't expect TCP to do it for you, that's not what it's for.
