
Azure scalability over XML File

What is the best-practice solution for programmatically changing the XML file where the number of instances is defined? I know that this is somehow possible with csmanage.exe for the Windows Azure API. How can I measure which Worker Role VMs are actually working? I asked this question on the MSDN Community forums as well: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/02ae7321-11df-45a7-95d1-bfea402c5db1


To modify the configuration, you might want to look at the PowerShell Azure Cmdlets; they really simplify the task. For instance, here's a PowerShell snippet that increases the instance count of 'WebRole1' in Production by one:

# Management certificate and subscription used to authenticate to the Service Management API
$cert = Get-Item cert:\CurrentUser\My\<YourCertThumbprint>
$sub = "<YourAzureSubscriptionId>"
$servicename = '<YourAzureServiceName>'

# Fetch the hosted service, select its Production deployment, and add one WebRole1 instance
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
Get-Deployment -Slot Production |
Set-DeploymentConfiguration {$_.RolesConfiguration["WebRole1"].InstanceCount += 1}

Now, as far as actually monitoring system load and throughput: you'll need a combination of Azure API calls and performance counter data. For instance, you can request the approximate number of messages currently in an Azure Queue with a metadata call:

http://yourstorageaccount.queue.core.windows.net/myqueue?comp=metadata
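
If you'd rather not hand-roll that REST call, the storage client library will issue it for you. Here's a minimal sketch, assuming the Windows Azure StorageClient library; the connection string and queue name parameters are placeholders you'd supply yourself:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static int GetApproximateQueueDepth(string connectionString, string queueName)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var queue = account.CreateCloudQueueClient().GetQueueReference(queueName);

    // Under the covers this issues the same ?comp=metadata request shown above
    // and reads the x-ms-approximate-messages-count header it returns.
    return queue.RetrieveApproximateMessageCount();
}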

You can also set up your role to capture specific performance counters. For example:

 public override bool OnStart()
 {
    var diagObj = DiagnosticMonitor.GetDefaultInitialConfiguration();
    AddPerfCounter(diagObj, @"\Processor(*)\% Processor Time", 60.0);
    AddPerfCounter(diagObj, @"\ASP.NET Applications(*)\Request Execution Time", 60.0);
    AddPerfCounter(diagObj, @"\ASP.NET Applications(*)\Requests Executing", 60.0);
    AddPerfCounter(diagObj, @"\ASP.NET Applications(*)\Requests/Sec", 60.0);

    //Set the service to transfer logs every minute to the storage account
    diagObj.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

    //Start Diagnostics Monitor with the new storage account configuration
    DiagnosticMonitor.Start("DiagnosticsConnectionString", diagObj);

    return base.OnStart();
 }

 //Helper referenced above: registers a counter specifier with the given sample rate (in seconds)
 private static void AddPerfCounter(DiagnosticMonitorConfiguration config, string counterName, double seconds)
 {
    config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
    {
        CounterSpecifier = counterName,
        SampleRate = TimeSpan.FromSeconds(seconds)
    });
 }

So this code captures a few performance counters into local storage on each role instance, then every minute those values are transferred to table storage.

The trick, now, is to retrieve those values, parse them, evaluate them, and then tweak your role instances accordingly. The Azure API will let you easily pull the perf counters from table storage. However, parsing and evaluating will take some time to build out.
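
As a rough illustration of that retrieval step, here's a sketch that queries the WADPerformanceCountersTable the diagnostics agent transfers to. It assumes the StorageClient library; the entity class and method name are made up for this example, and a real implementation would filter more efficiently (e.g. on PartitionKey) and handle result continuation:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Maps just the columns we need from the rows the diagnostics agent writes.
public class PerfCounterEntity : TableServiceEntity
{
    public long EventTickCount { get; set; }
    public string CounterName { get; set; }
    public double CounterValue { get; set; }
}

public static double AverageCounterOverWindow(string connectionString, string counterSubstring, TimeSpan window)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var context = account.CreateCloudTableClient().GetDataServiceContext();
    long since = DateTime.UtcNow.Subtract(window).Ticks;

    // Pull recent rows from the table the agent transfers to, then filter in memory.
    var recent = context.CreateQuery<PerfCounterEntity>("WADPerformanceCountersTable")
        .Where(e => e.EventTickCount > since)
        .ToList();

    var matches = recent.Where(e => e.CounterName.Contains(counterSubstring)).ToList();
    return matches.Any() ? matches.Average(e => e.CounterValue) : 0.0;
}

With something like that returning, say, average CPU or requests/sec over the last few minutes, the remaining work is the evaluation and scaling logic.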

Which leads me to my suggestion that you look at the Azure Dynamic Scaling Example on the MSDN code site. This is a great sample that provides:

  • A demo line-of-business app hosting a WCF service
  • A load-generation tool that pushes messages to the service at a rate you specify
  • A load-monitoring web UI
  • A scaling engine that can be run either locally or in an Azure role

It's that last item you want to take a careful look at. It compares your performance counter data, as well as queue-length data, against configured thresholds, and based on those comparisons it scales your instances up or down accordingly.

Even if you end up not using this engine, you can see how data is grabbed from table storage, massaged, and used for driving instance changes.
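
To make the idea concrete, here's a toy version of just the decision step (not the sample's actual engine; the threshold numbers and method name are invented for illustration):

// Compare averaged metrics against thresholds and return the desired instance change.
public static int DesiredInstanceDelta(double avgCpuPercent, int queueDepth)
{
    const double cpuHigh = 75, cpuLow = 25;
    const int queueHigh = 500, queueLow = 50;

    if (avgCpuPercent > cpuHigh || queueDepth > queueHigh) return +1;  // scale out
    if (avgCpuPercent < cpuLow && queueDepth < queueLow)   return -1;  // scale in
    return 0;                                                          // hold steady
}

The returned delta would then be applied through the Service Management API - for example with the Set-DeploymentConfiguration snippet shown near the top of this answer.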


Quantifying the load is actually very application-specific, particularly when thinking through Worker Roles. For example, in a large parallel-processing application the expected (indeed hoped-for) behavior is 100% CPU utilization across the board, so the 'scale decision' may instead be based on whether the work queue is growing or shrinking.

Further complicating the decision is the lag time of the various steps - increasing the role instance count, joining the load balancer, and/or dropping from the load balancer. It is very easy to get into a situation where you are "chasing" the curve, constantly churning up and down.

As to your question about specific VMs: since all VMs in a role definition are identical, measuring a single VM (unless the deployment has an instance count of 1) should not really tell you much - all of them sit behind the same load balancer and/or pull from the same queue, so any variance should be transitory.

My recommendation would be to pick something to monitor that is not inherently highly variable (CPU, for example, is). Generally, you want to find a trending point - for web apps it may be the response queue, for parallel apps it may be Azure queue depth, etc. - but in either case it is the trend, not the absolute number, that matters. I would also suggest measuring at fairly broad intervals - minutes, not seconds. If you have a load you need to respond to in seconds, then realistically you will need to increase your running instance count ahead of time.
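
For what that trend check might look like in code, here's a hedged sketch that fits queue-depth samples taken once a minute and reports whether the backlog is growing (the method name and sampling policy are my own assumptions):

using System.Collections.Generic;
using System.Linq;

// Decide on the trend of queue depth sampled at broad intervals (e.g. once per minute),
// rather than reacting to any single reading.
public static bool IsBacklogGrowing(IList<int> queueDepthSamples)
{
    if (queueDepthSamples.Count < 2) return false;

    // Simple least-squares slope over the sample window.
    int n = queueDepthSamples.Count;
    double meanX = (n - 1) / 2.0;
    double meanY = queueDepthSamples.Average();
    double num = 0, den = 0;
    for (int i = 0; i < n; i++)
    {
        num += (i - meanX) * (queueDepthSamples[i] - meanY);
        den += (i - meanX) * (i - meanX);
    }
    return num / den > 0;   // positive slope => backlog trending upward
}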


With regard to your first question, you can also use the Autoscaling Application Block to dynamically change instance counts based on a set of predefined rules.
