Overhead of running a non-HTTP server in Azure instance?
I intend to build and deploy a custom server on Azure. I understand that incoming data will have to go through the load balancer first before reaching the instances, so in order to listen for requests I need to listen on a port assigned by the load balancer. My question is: is there any latency overhead when incoming data has to go through the LB first? And do I need to change my code to be able to spread across many instances, or will the load balancer handle these things for me?
There is going to be "some" latency, simply because it's another hop in the path. However, traffic from the load balancer to your VM instance is going to be very fast, as the two are in the same data center. This is just part of the Windows Azure fabric.
The load balancer itself is not programmable by you, and basically provides round-robin distribution. You need to make sure your server has no dependency on sticky sessions - there's absolutely no guarantee that a user whose request hits Server0 will hit Server0 again on the next request.
Having said that: To share state information across instances, take a look at the AppFabric Cache, which went into production about a month ago. This is Cache-as-a-Service, and provides a very fast way of storing and retrieving key/value pairs. More information about AppFabric Cache is here.
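To illustrate the stateless pattern, here's a minimal sketch in Python. The `SharedCache` class is a hypothetical in-memory stand-in for a Cache-as-a-Service client (the real AppFabric Cache client is a .NET API); the point is only that session state lives outside any single instance, so round-robin routing doesn't matter:

```python
# Hypothetical stand-in for a shared cache service (e.g. AppFabric Cache).
# Every server instance talks to the same cache rather than keeping
# session state in its own process memory.

class SharedCache:
    """Toy key/value store standing in for a Cache-as-a-Service client."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


class ServerInstance:
    """One web server instance behind the round-robin load balancer."""
    def __init__(self, name, cache):
        self.name = name
        self.cache = cache  # shared across instances, not per-instance

    def handle_login(self, session_id, user):
        # State goes to the shared cache, not to local memory.
        self.cache.put(session_id, {"user": user})

    def handle_request(self, session_id):
        # Any instance can recover the session from the shared cache.
        state = self.cache.get(session_id)
        return f"{self.name}: hello {state['user']}"


cache = SharedCache()
server0 = ServerInstance("Server0", cache)
server1 = ServerInstance("Server1", cache)

# First request happens to land on Server0, the next on Server1:
server0.handle_login("sess-42", "alice")
print(server1.handle_request("sess-42"))  # Server1 still sees the session
```

If the login had written to a field on `server0` instead, the second request would fail whenever the LB routed it elsewhere - that's the sticky-session dependency to avoid.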
Of course there is going to be some latency caused by the load balancer. But the LB can be quite simple, so the latency should be minimal.
And the LB does exactly what it says on the tin – it just balances load. So if your application is written in a way that it wouldn't work correctly when user A accesses it on server X and user B accesses it on server Y at the same time, it won't work even though both users think they're accessing the same server Z.