
Preferred way of logging in .Net deployed to Azure

Would you say this is the best way of doing simple, traditional logging in an Azure-deployed application?

It feels like a lot of work to actually get to the files, etc.

What's worked best for you?


We use the built-in diagnostics that write to Azure Table Storage. Any time we need a message written to a log, it's just a Trace.WriteLine(...).

Since the logs are written to Azure Table Storage, we have a process that will download the log messages, and remove them from the table storage. This works well for us, but I think it probably depends on the application.
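
Writing a log entry from role code is then just a normal Trace call once the Azure diagnostics trace listener is in place (a minimal sketch; the method and messages here are only illustrative):

using System.Diagnostics;

public void PlaceOrder(int orderId)
{
    // Buffered by the Azure diagnostics trace listener and pushed to the
    // WADLogsTable on the next scheduled transfer.
    Trace.WriteLine(string.Format("Placing order {0}", orderId), "Information");

    // Severity-specific variants exist as well.
    Trace.TraceError("Order {0} failed validation", orderId);
}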

http://msdn.microsoft.com/en-us/library/gg433048.aspx

Hope it helps!

[Update]

public void GetLogs() {
    int cnt = 0;
    var entities = context.LogTable;
    while (true) {
        bool foundRows = false;
        foreach (var en in entities) {
            processLogRow(en);
            context.DeleteObject(en);
            foundRows = true;
            cnt++;
            // Commit the pending deletes in batches of 100 entities
            // (the maximum allowed in a single batch operation).
            if (cnt % 100 == 0) {
                try {
                    context.SaveChanges(SaveChangesOptions.Batch);
                } catch (Exception ex) {
                    Console.WriteLine("Exception deleting batch. {0}", ex.Message);
                }
            }
        }
        if (!foundRows)
            break;
        // Commit any partial batch left over from this pass, then re-query
        // in case more log rows arrived while we were processing.
        context.SaveChanges(SaveChangesOptions.Batch);
    }
    Console.WriteLine("Done! Total Deleted: {0}", cnt);
}


Adding a bit to Brosto's answer: it takes only a few lines of code to configure Azure Diagnostics. You decide what level you want to capture (verbose, informational, etc.) and how frequently you want to push locally cached log messages to Azure storage (I usually go with something like 15-minute intervals). Log messages from all of your instances are then aggregated into the same table, easily queryable (or downloadable), with properties identifying the role and instance.
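
As a rough sketch (assuming the DiagnosticMonitor API from the 1.x SDK; the connection string setting name may differ depending on your SDK version and project), the role's OnStart might look something like this:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Decide what level gets pushed to storage.
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // Push locally cached log messages to table storage every 15 minutes.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(15);

        // The setting name comes from your service configuration.
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}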

There are additional trace statements, such as Trace.TraceError(), Trace.TraceWarning(), etc.

You can even create a trace listener and watch your log output in almost-realtime on your local machine. The Azure AppFabric SDK Samples zip contains a sample (under \ServiceBus\Scenarios\CloudTrace) for doing this.


For error logging, the best solution I've seen is Elmah. It requires a SQL database, but it is an error logging tool that actually helps diagnose problems. It works fine on Azure.


For all my Azure sites I use custom logging to Azure tables. Although it is a bit more work, I find it gives me more control over the information that gets stored. As Brosto commented above, it is best to have a local process that periodically downloads the logs to your local system. If you derive a class from TableServiceEntity, you can define a structure containing all the fields you wish to log, and use the same class to retrieve the data in the local application that downloads the logs. I maintain some examples of the code to do this on my logging-with-Azure-Table-Storage page, if it's of any help to anyone.
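
A minimal sketch of what such an entity might look like (the class and property names are purely illustrative, assuming the StorageClient library's TableServiceEntity):

using System;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative log entity; the same class can be reused by the local
// application that downloads the rows.
public class LogEntry : TableServiceEntity
{
    public LogEntry() { }

    public LogEntry(string roleInstance, string severity, string message)
    {
        // Partition by date so a day's logs can be queried (or purged) together.
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd");
        // Reverse-tick row key keeps the newest entries at the top of a scan.
        RowKey = string.Format("{0:D19}_{1}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks, Guid.NewGuid());

        RoleInstance = roleInstance;
        Severity = severity;
        Message = message;
    }

    public string RoleInstance { get; set; }
    public string Severity { get; set; }
    public string Message { get; set; }
}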

One of the problems I have experienced with the Trace.WriteLine method is that the logs are stored on the local instance and only periodically transferred to Azure Table Storage. Given the transient nature of an Azure instance, all local storage must be considered temporary at best, so there is always a window for losing log data while it is held on the local drive.

Given how cheap Azure Table Storage transactions are, logging directly to Azure storage is extremely cost-effective. If performance is a major issue for you, it may be worthwhile dedicating a separate thread (or threads) to servicing an in-memory queue of logging data. Although this obviously has similar issues with transient data if the Azure instance is recycled, the window for that to happen should be much smaller.
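
A sketch of that idea, reusing the illustrative LogEntry entity from above and a ConcurrentQueue drained by a dedicated background thread (the names are mine, and anything still queued when the instance recycles is lost):

using System;
using System.Collections.Concurrent;
using System.Threading;

public class QueuedTableLogger
{
    private readonly ConcurrentQueue<LogEntry> _queue = new ConcurrentQueue<LogEntry>();
    private readonly Action<LogEntry> _writeToTableStorage;

    public QueuedTableLogger(Action<LogEntry> writeToTableStorage)
    {
        _writeToTableStorage = writeToTableStorage;
        var worker = new Thread(Drain) { IsBackground = true };
        worker.Start();
    }

    // Callers return immediately; the worker thread absorbs the storage latency.
    public void Log(LogEntry entry)
    {
        _queue.Enqueue(entry);
    }

    private void Drain()
    {
        while (true)
        {
            LogEntry entry;
            while (_queue.TryDequeue(out entry))
            {
                _writeToTableStorage(entry); // e.g. AddObject + SaveChanges on a table context
            }
            Thread.Sleep(1000); // anything still queued when the instance recycles is lost
        }
    }
}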


As was already mentioned, using Windows Azure Diagnostics is the way to go. However, all the logging from all your instances ends up in one big list, which can be hard to read through. Therefore I try to send only relatively important messages (Warn level and higher) to the diagnostics tables. Even so, it's a pain to read the table directly. There are a few tools out there; I personally use Cerebrata Diagnostics Manager.

Although using the Trace functions directly works fine, I'd suggest using a logging framework such as NLog or log4net. That gives you a bit more flexibility to send some messages to Trace/Azure Diagnostics and others to local storage.
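
For example, a sketch of programmatic NLog configuration (the same split can be expressed in NLog.config; the path is only illustrative) that sends Warn and above to the Trace target, which feeds Azure Diagnostics, while keeping verbose detail in a local file:

using NLog;
using NLog.Config;
using NLog.Targets;

public static class LoggingSetup
{
    public static void Configure()
    {
        var config = new LoggingConfiguration();

        // Trace target feeds the Azure diagnostics trace listener.
        var traceTarget = new TraceTarget();
        config.AddTarget("trace", traceTarget);
        config.LoggingRules.Add(new LoggingRule("*", LogLevel.Warn, traceTarget));

        // Verbose detail stays on the instance's local disk.
        var fileTarget = new FileTarget { FileName = @"\ServiceLogs\verbose.log" };
        config.AddTarget("file", fileTarget);
        config.LoggingRules.Add(new LoggingRule("*", LogLevel.Trace, fileTarget));

        LogManager.Configuration = config;
    }
}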

For example, I added a ton of trace logging to track down a thread-hanging problem. I found that giving a root-relative file path such as "\ServiceLogs\MyLog.txt" will output to the F: drive on the instance. So I routed all that to the instance filesystem, rather than the Diagnostics tables. You have to remote into each instance to see those logs, but in this circumstance it's a good trade off.


I use Enterprise Library 5.0 Logging Application Block pointing to the Azure Diagnostic Monitor Trace Listener.

Enterprise Library on Windows Azure


While not a traditional logging framework, the Story framework can really help when you actually want to read your logs. It "makes" you write all logs (and add other relevant information) in context, so when you need to read them later you get everything you need.

It also supports persisting the logs to Azure Table storage.

More information and samples are available in this blog post.
