How to prevent concurrency in web service API?
We have three web services (/a, /b, /c) where each service maps to a method (go()) in a separate Java class (ClassA, ClassB, ClassC).
Only one service should run at a time (i.e., /b cannot run while /a is running). However, as this is a REST API, there is nothing to prevent clients from requesting the services concurrently.
What is the best and most simple method on the server to enforce that the services don't run concurrently?
Update: This is an internal app, we will not have a large load and will just have a single app server.
Update: This is a subjective question as you can make different arguments on the general application design which affects the final answer. Accepted overthink's answer as I found that most interesting and helpful.
Your design is flawed. The services should be idempotent. If the classes you have don't support that, redesign them until they do. Sounds like each of the three methods should be the basis for the services, not the classes.
Assuming it's not OK to just force the web server to have only one listening thread serving requests... I suppose I'd just use a static lock (ReentrantLock, probably, for clarity, though you could synchronize on any shared object, really):
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Global {
    // Single application-wide lock shared by all three services.
    public static final Lock webLock = new ReentrantLock();
}

public class ClassA {
    public void go() {
        Global.webLock.lock();
        try {
            // do A stuff
        } finally {
            Global.webLock.unlock();
        }
    }
}

public class ClassB {
    public void go() {
        Global.webLock.lock();
        try {
            // do B stuff
        } finally {
            Global.webLock.unlock();
        }
    }
}

public class ClassC {
    public void go() {
        Global.webLock.lock();
        try {
            // do C stuff
        } finally {
            Global.webLock.unlock();
        }
    }
}
Firstly, without knowing your architecture, you are probably going to run into issues if you have to enforce concurrency restrictions in the web-service tier. While you could use traditional locks to serialize the requests across the services, what happens when you add a second web tier to scale your solution? If the locks are local to the web layer, they will be next to useless.
I'm guessing there is probably a layer of some sort that sits below the web services, and it's there that you need to enforce these restrictions. If client B comes in after client A has made a conflicting request, then the backend should reject the request when it finds that the state has changed, and you should then return a 409 to the second client. In the end, race conditions are still possible, but your lowest common layer has to protect you from conflicting requests.
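One way to produce that 409 at the web layer is to attempt the lock without blocking and reject the request if it is already held. This is a minimal sketch, not the answerer's code; the class and method names are hypothetical, and the HTTP plumbing is left to whatever framework you use:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical guard: tryEnter() takes the shared lock without blocking.
// A false return means another service is running and the caller should
// answer 409 Conflict instead of waiting.
public class ExclusiveGuard {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static boolean tryEnter() {
        return LOCK.tryLock(); // non-blocking: false if already held
    }

    public static void exit() {
        LOCK.unlock();
    }
}
```

A service method would then be wrapped as `if (!ExclusiveGuard.tryEnter()) return conflict409(); try { ... } finally { ExclusiveGuard.exit(); }`.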
You could use a semaphore of some kind to keep access across services serial.
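For example, a single-permit semaphore shared by all three services would serialize them; passing `true` makes it fair, so waiting requests run in arrival order. A minimal sketch (the class name is illustrative):

```java
import java.util.concurrent.Semaphore;

// One permit shared by all services: only one go() runs at a time.
public class ServiceGate {
    private static final Semaphore PERMIT = new Semaphore(1, true);

    public static void runExclusively(Runnable work) {
        PERMIT.acquireUninterruptibly(); // blocks until no other service is running
        try {
            work.run();
        } finally {
            PERMIT.release();
        }
    }

    public static int availablePermits() {
        return PERMIT.availablePermits();
    }
}
```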
Why not use hypermedia to constrain access?
Use something like
POST /A
to initiate the first process. Then, when it is complete, the results should provide a link to follow to initiate the second process:
<ResultsOfProcessA>
<Status>Complete</Status>
<ProcessB href="/B"/>
</ResultsOfProcessA>
Follow the link to initiate the second process,
POST /B
and repeat for part C.
Arguably a badly behaving client could cache the link to step B and attempt to re-use it in some future request to circumvent the sequence. However, it would not be too difficult to assign some kind of token when doing step A and require that the token be passed to step B and C to prevent the client from constructing the URL manually.
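That token idea can be sketched as a one-time value minted when step A completes and consumed when step B presents it, so a cached /B link cannot be replayed. The class and method names below are hypothetical:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical one-time token: issue() mints it when A finishes; redeem()
// atomically consumes it, so a token can be used at most once.
public class StepToken {
    private static final AtomicReference<String> current = new AtomicReference<>();

    // Called when step A completes; the token is embedded in the /B link.
    public static String issue() {
        String token = UUID.randomUUID().toString();
        current.set(token);
        return token;
    }

    // Called by step B: succeeds only for the live token, and consumes it.
    public static boolean redeem(String presented) {
        return presented != null && current.compareAndSet(presented, null);
    }
}
```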
Reading your comments further, it seems that you have a situation where A could be run either before or after B. In this case I would suggest creating a resource D that represents the status of the entire set of processes (A,B and C). When a client retrieves D it is presented with the URIs that it is allowed to follow. Once a client has initiated the A process then the D resource should remove the B link for the duration of the processing. The opposite should occur when B is initiated before A.
The other advantage of this technique is that it is obvious if A or B has been run for the day as the status can be displayed in D. Once A and B have been run then D can contain a link for C.
The hypermedia is not a 100% foolproof solution because you could have two clients with the same copy of D and both might think that process A has not been run and both could attempt to run it simultaneously. This could be addressed by having some kind of "Last Modified" timestamp on D and you could update that timestamp whenever the status of D changes. This could allow the later request to be denied. Based on the description of your scenario it would seem that this is more of an edge case and the hypermedia would catch most attempts to run processes in parallel.
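The timestamp guard on D amounts to optimistic concurrency: each update must quote the version of D it saw, and a stale version is rejected (the HTTP analogue being If-Unmodified-Since or an ETag with 412 Precondition Failed). A minimal sketch with illustrative names:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the "Last Modified" guard on resource D, modelled as a version
// counter: the later of two concurrent updates sees a stale version and is denied.
public class StatusResourceD {
    private static final AtomicLong version = new AtomicLong(0);

    public static long currentVersion() {
        return version.get();
    }

    // Returns true if applied; false means the client's copy of D was stale
    // and the request should be rejected.
    public static boolean updateIfCurrent(long seenVersion) {
        return version.compareAndSet(seenVersion, seenVersion + 1);
    }
}
```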