What kinds of abstraction layers are typical between clients and servers in a request/response pattern?
So I'm brainstorming some stuff with a coworker right now about a backend API we're in the process of sketching out. It's a pretty straightforward read API: a client requests certain data from a server, and the server replies with that data.
We're just brainstorming ideas at the moment, and one idea that came up was a sort of intermediary or abstraction layer between the client and the server. The main reason for this is that the state on the server changes very rarely, but the client needs to check it constantly.
So, rather than having something like this:
Client <--> Server
You'd have:
Client <--> Intermediary <--> Server
Where the intermediary would be a super lightweight service capable of fielding requests from the client quickly. Basically it would cache the server's responses, and if state did happen to change on the server, the server would notify the intermediary, and on future requests the intermediary would respond with the updated data.
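To make that more concrete, here's a rough sketch of what we're imagining. This is just Python with made-up names (CachingIntermediary, fetch_from_origin), not tied to any particular framework or transport:

```python
class CachingIntermediary:
    """Sits between clients and the origin server, answering reads from memory."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # call-through to the real server
        self._cache = {}                  # key -> last known value

    def handle_client_request(self, key):
        # Serve from the cache when we can; fall through to the server otherwise.
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]

    def handle_server_notification(self, key, new_value):
        # The origin server calls this whenever its state changes,
        # so future client requests see the updated data.
        self._cache[key] = new_value


# Example wiring: the "server" is just a dict here.
origin = {"config": {"version": 1}}
proxy = CachingIntermediary(lambda key: origin[key])

print(proxy.handle_client_request("config"))   # fetched once from the origin
origin["config"] = {"version": 2}
proxy.handle_server_notification("config", origin["config"])
print(proxy.handle_client_request("config"))   # served from the updated cache
```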
So, to my actual question: is there a name for this pattern, and is it relatively common (or uncommon)? Are there services or examples where something like this is implemented? Are there tools that help one implement such a pattern? For example, I spent a bit of time investigating ZeroMQ, but it seems to be used simply for message passing; there is no built-in way for it to cache data or otherwise manage state on an intermediary the way I'm envisioning.
Sorry this is all admittedly vague, but it truly is just some brainstorming we're doing. I'm mostly just wishing I could find a name for this concept or pattern so that I can dig in, do more research, understand the pros and cons, etc.
I think it's just called client/server architecture. A suggestion, though: instead of having this "intermediate server" between the client and the server, why not have the server broadcast to all connected clients whenever an update is available? That way your clients are not continually asking your server whether anything has changed.
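Roughly what I mean, as a toy sketch. Plain callbacks stand in for real connections here; in practice this would be WebSockets, server-sent events, or a message broker:

```python
class PushServer:
    def __init__(self, initial_state=None):
        self._state = initial_state
        self._subscribers = []              # connected clients (plain callbacks here)

    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._state)               # give a new client the current state

    def update_state(self, new_state):
        self._state = new_state
        for notify in self._subscribers:    # push to everyone; nobody has to poll
            notify(new_state)


server = PushServer(initial_state={"status": "initial"})
server.subscribe(lambda state: print("client A sees:", state))
server.subscribe(lambda state: print("client B sees:", state))
server.update_state({"status": "changed"})
```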
I see this as a ClientProxy.
Conceptually what you really want is for the Server to tell the Clients something, but for whatever reason you can't deliver those events to the Client. So instead you create a Proxy that the server sends the events to. Then the ClientProxy and the real Client communicate in their own way, in this case by polling.
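A toy sketch of that idea, with hypothetical names; the version counter is just one way to keep the polling cheap, since a client can ask "anything new since version N?" without re-downloading the data:

```python
class ClientProxy:
    def __init__(self):
        self._version = 0
        self._data = None

    def on_server_event(self, data):
        # Called by the real server whenever its state changes.
        self._version += 1
        self._data = data

    def poll(self, last_seen_version):
        # Called by clients on their own schedule.
        if last_seen_version == self._version:
            return last_seen_version, None      # nothing new
        return self._version, self._data        # updated data


proxy = ClientProxy()
proxy.on_server_event({"price": 10})

version, data = proxy.poll(last_seen_version=0)
print(version, data)                            # client gets the update
version, data = proxy.poll(last_seen_version=version)
print(version, data)                            # (1, None): no change since then
```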
In passing, note that there are technologies such as Comet that allow Browser clients to receive push events.
In your proposed architecture, it sounds like the Intermediary is simply a data cache. When the Server detects a data update, it contacts the Intermediary to update its cached data. Clients contact the Intermediary, which is able to respond quickly with cached answers.
If the above assumption is correct, why not simply teach the Server to cache the data itself? That way, it will respond quickly (returning cached data) whenever a client asks, and when the data is updated, it can instantly invalidate its own cache. That seems like a solution that is much easier to implement and maintain.
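For example, a minimal sketch of that idea; load_from_database is a stand-in for whatever slow read your server actually does:

```python
import functools

DATABASE = {"config": {"version": 1}}       # stand-in for the real data store

def load_from_database(key):
    print(f"expensive lookup for {key!r}")  # pretend this is the slow part
    return DATABASE[key]

@functools.lru_cache(maxsize=None)
def get_data(key):
    return load_from_database(key)

def update_data(key, value):
    DATABASE[key] = value
    get_data.cache_clear()                  # invalidate the moment data changes

print(get_data("config"))    # first call hits the "database"
print(get_data("config"))    # cached; no lookup printed
update_data("config", {"version": 2})
print(get_data("config"))    # cache was invalidated, so this reloads fresh data
```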
Are there reasons you need Intermediary to be its own separate and distinct entity?