
Performance analysis strategies

I am assigned to a performance-tuning-debugging-troubleshooting task.

Scenario: a multi-application environment running on several networked machines using databases. OS is Unix, DB is Oracle. Business logic is implemented across applications using synchronous/asynchronous communication. Applications are multi-user with several hundred call center users at peak time. User interfaces are web-based.

Applications are third party, I can get access to developers and source code. I only have the production system and a functional test environment, no load test environment.

Problem: bad performance! I need fast results. Management is going crazy.

Here are some example symptoms: user interface actions taking minutes to complete. Searching for a customer usually takes 6 seconds, but an immediate subsequent search with the same parameters may take 6 minutes.

What would be your strategy for finding root causes?


If this is an 11th-hour type scenario, and this is a system you're walking up to without prior knowledge, here's how I'd go about it - specific instructions below are for the unix newb, but the general principles are sound for any system triage:

  1. Create a text file with the name of every single one of your production hosts in it. Let's call it prodhosts
  2. Get your public ssh key into ~/.ssh/authorized_keys on every one of those prodhosts. If you're not familiar with ssh agents and how to make logins everywhere fast, take 10 minutes and read up on it, or use a script that handles it for you.
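
    If you still need to push your key out first, a loop like this works on most setups (a sketch; it assumes password logins are still enabled and that ssh-copy-id is installed):

    for i in `cat prodhosts` ; do ssh-copy-id $i ; done
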
  3. Check system load on all servers

    for i in `cat prodhosts` ; do echo $i ; ssh $i uptime ; done
    

    High load averages (very generally speaking, more than the number of cores you have) indicate problem servers. Make a note of them - you'll look at them soon.
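
    If you don't know the core counts offhand, print them next to the load so you can compare on the spot (a sketch; getconf _NPROCESSORS_ONLN works on Linux and most modern Unixes):

    for i in `cat prodhosts` ; do echo $i ; ssh $i 'uptime ; getconf _NPROCESSORS_ONLN' ; done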

  4. Check for full disks - these are very common

    for i in `cat prodhosts` ; do echo $i ; ssh $i df -h ; done
    

    Any host that's at or near 100% disk usage is going to be a problem. Make a note of any problem servers you find in this way.
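
    To flag only the problem filesystems automatically, filter on the use-percentage column (a sketch; assumes POSIX df -P output, where column 5 is the percentage used):

    for i in `cat prodhosts` ; do ssh $i df -hP | awk -v h=$i '$5+0 >= 90 { print h, $0 }' ; done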

  5. Check for swap activity - swapping is the most common cause of bad performance (and it's usually paired with the above indicator of a high load average).

    for i in `cat prodhosts` ; do echo $i ; ssh $i free -m ; done
    

    That'll tell you how much memory all of your boxes have, and how much they're each swapping. Here's what a healthy system with around 16GB of RAM might look like:

                 total       used       free     shared    buffers     cached
    Mem:         15884      15766        117          0         61      14928
    -/+ buffers/cache:        776      15107
    Swap:        31743          0      31743
    

    It's likely that your problem boxes will have a high number in the used column for Swap. That's the amount of memory your applications are trying to use that your machine doesn't have.
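
    Note that a big number in the Swap used column only shows memory that was pushed out at some point; to see whether a box is actively thrashing right now, watch the si/so (swap-in/swap-out) columns of vmstat for a few seconds (a sketch; vmstat ships with most Linux and Unix systems, though column names vary slightly):

    for i in `cat prodhosts` ; do echo $i ; ssh $i vmstat 1 3 ; done

    Sustained non-zero si/so values mean the machine is paging as you watch, which lines up with the high load averages from step 3.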

  6. Armed with that information, you should have a better idea of where the bottleneck is in 95% of all systems (the remaining 5% would be slowed down by remote network resources or gremlins). Now you do standard triage. Start at the bottom of the stack - i.e. if you have high load and crappy performance everywhere, start with your database, because it's likely that its problems are cascading out everywhere else (if your DB is humming along fine, obviously look elsewhere first - but always be suspicious of databases when performance is on the line):

    • Database - get a log of all queries being run that take over, say, 400ms, over as large a sample period as you can afford to take (ideally these logs will already exist; otherwise get them together and let the data collect for an hour or so). Hack together some scripts that normalize the queries and figure out which queries take up the most total time on your system (also be on the lookout for crappy 1-off queries that take way too long and slow everything else down); a sketch of such a normalizer follows this list. You'll want to analyze those queries with an explain plan and figure out how to get them to hit indexes better, or figure out how to remove them from your system altogether if possible. Ask your DBA for help if you have one, and use an off-the-shelf query log analyzer if you can.
    • Application - look through the logs and watch out for anything crazy. Apps and logging vary wildly, so this is very system-dependent.
    • Operating System (use this on any box) - look at the output of dmesg on your box - does it have any warnings? Look through the logs in /var/log - see anything interesting? Any logs that are bursting at the seams? Those are your problem points.
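
    Here's the kind of quick-and-dirty normalizer I mean (a sketch, not Oracle-specific; it assumes each log line looks like "<duration_ms><TAB><sql text>", so adapt the parsing to whatever your logs actually contain):

    awk -F'\t' -v sq="'" '
    {
      s = $2
      gsub(sq "[^" sq "]*" sq, "?", s)   # collapse string literals to ?
      gsub(/[0-9]+/, "N", s)             # collapse numeric literals to N
      total[s] += $1                     # sum time per normalized query shape
      count[s]++
    }
    END {
      for (s in total)
        printf "%12d ms %8d calls  %s\n", total[s], count[s], s
    }' queries.log | sort -rn | head -20

    The query shapes at the top of that list are where your tuning time pays off most.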

After you've done the fast and loose hacking to get the system back to a stable state, sit down and talk to "management" about monitoring, log analysis, and all of the standard tools of the sysadmin trade that should help prevent scenarios like the one you're in from occurring. Read up on Nagios, Munin, rsyslog, and the like, or hire someone who can automate your datacenter and its monitoring for you. Also, if the app is third party, talk to the vendor about how they expect you to handle this type of situation - if it's an off-the-shelf product, they should have guidelines for the requirements necessary to run their app successfully. If it's something you hired a random contracting company to build, consider recommending to management that they hire people who know what they're doing.


Check CPU utilization. If it is low while performance is bad, it is probably a database issue: analyze the queries and look for sequential scans; maybe an index is simply missing.
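
For a quick look at where CPU time is actually going on a suspect box, a couple of sar samples are enough (a sketch; sar is present on most Unix systems, though column names differ a little):

    sar -u 5 3    # three 5-second samples; high %idle or %iowait despite slow responses points away from raw CPU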

Check which component is idle; there could be some kind of timeout or a missing resource.

Anything else depends on the architecture of the application. You definitely need a test environment to set up a decent benchmark; alternatively, let the managers (who bought this stuff) pay for third-party support.


Run Sysinternals File Monitor and Process Monitor to find excessive I/O. This is most easily done when performance drags as users run particular reports or programs. Partner with your Oracle DBA to monitor the database performance. Partner with the sysadmin to monitor the disks that the Oracle tables reside on. You're looking for poorly-executed queries resulting in full table scans, matrix results, etc. Have the sysadmin/netadmin monitor network saturation.

Copy production data and code to another, isolated test system and measure performance. See where CPU and Disk performance go through the roof.

Note that FileMonitor output is .csv format and will quickly overwhelm Excel. But Excel can treat that .csv as an external datasource and you can connect it to a Pivot Table. Just use the Pivot Table wizard, point it at the report file, and measure the application name, dataset filename, and bytes read/written. You'll quickly find the files that are being hammered with I/O. Sometimes solutions are simple, such as wrapping thousands of database updates in a single transaction.
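
On that transaction point, the win comes from committing once per batch instead of once per row. In Oracle terms the shape of the fix looks roughly like this, run from SQL*Plus (a sketch; table and column names are placeholders):

    -- one round trip, one commit for the whole batch,
    -- instead of one autocommitted statement per row
    BEGIN
      FOR r IN (SELECT id FROM staging_rows) LOOP   -- placeholder names
        UPDATE accounts SET balance = balance + 1 WHERE id = r.id;
      END LOOP;
      COMMIT;
    END;
    /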


See on which machine(s) the CPU load is high; this way you will probably be able to figure out whether the problem is on the database side or in the UI code (the latter is quite unlikely, but still worth checking). Also check whether any machine runs low on memory (how you check this depends on the platform), i.e. whether the slowdown is caused by constant virtual-memory swapping.

Copy production data to a testing system and verify whether access to the data is slow even without high load. If it is, the cause is most likely a poor database design. If it isn't (i.e. it becomes slow only under load), things are more complicated. If CPU loads are low yet heavy system load causes a slowdown, there might be a problem with locks and unnecessary blocking (see the sketch below). If CPU loads are high, this probably indicates a suboptimal database design or poor result caching (though normally the database should handle that itself).
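
Oracle will show you lock-related blocking directly if you have SELECT privilege on the v$ views; from SQL*Plus, something like this lists every session currently stuck behind another session's lock (a sketch):

    SELECT sid, blocking_session, seconds_in_wait, event
      FROM v$session
     WHERE blocking_session IS NOT NULL;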

Check the logs, or ask the developers to log all SQL queries and their runtimes. If "all" is too many, ask them to log only those that take more than, say, 3 seconds to complete. Manually run the slow queries and ask Oracle to explain what it is doing.
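
Asking Oracle to explain itself looks like this from SQL*Plus (a sketch; the query is a placeholder, substitute one of your slow ones, and on big tables be suspicious of any TABLE ACCESS FULL step in the output):

    EXPLAIN PLAN FOR
      SELECT * FROM customers WHERE last_name = 'Smith';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);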

Manually check the database for tables with obviously missing indexes. If there are not many tables, you can usually do this faster than finding out which queries are slow.
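
The data dictionary makes that check quick; from SQL*Plus, this lists every table in the current schema with no index at all (a sketch):

    SELECT t.table_name
      FROM user_tables t
     WHERE NOT EXISTS (SELECT 1
                         FROM user_indexes i
                        WHERE i.table_name = t.table_name);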


OK. I've done this, and the basic method I use is this.

Here's an example of the kind of results that are ultimately possible.

That said, that's only the beginning.

There will be multiple problems, and each one you find and fix will improve things significantly, but you will not be done. You have to get most of them.

The biggest problem will be that you will isolate a reason for poor performance, and it will require that one or more of the applications be coded a little (or a lot) differently. You will encounter resistance on the part of the programmers who "own" the code to make those changes. It may very well violate their sense of properly designed code, and that is a tough feeling to overcome.

For one example, I worked on an application with a severe performance problem that was considered company-threatening. As always happens, there were Wild-XX-Guesses as to what the problem was, and people only too willing to invest time and money in those guesses. The real problem was a decision to use XML as a communication format between portions of the application: most of the time was going into generating and parsing XML, even though the two parts of the application happened to be in the same process and could have exchanged information directly. Changing this required a design change, which was not such a difficult thing to do. The difficult thing was getting the programmer to accept that this part had to be done differently.

In my experience, most serious performance problems are caused by over-general approaches to abstraction and data structure, which have been taught religiously, and which programmers are extremely reluctant to reconsider.

That is the part I haven't figured out how to overcome.
