Oracle 11g - AWR report Analysis help
I am a newbie to Oracle and I created an AWR report from Enterprise Manager. Here is a small snapshot of the report.
Logons: 0.01 per second - 0.02 per transaction
Executes: 19.0 per second - 27.7 per transaction
Rollbacks: 0.2 per second - 0.2 per transaction
In-memory Sort %: 100.00
% SQL with executions > 1: Begin 97.45 - End 94.41
Buffer Hit %: 99.98
Buffer Cache: Begin 240M - End 240M
Shared Pool Size: Begin 1,024M - End 1,024M

Can you let me know what each of these terms like 'logons' and 'executes' stands for, and how they can be used to determine database performance?
Sorry if this question looks stupid, but I couldn't find an explanation anywhere on the net.
Logons are pretty simple - how many times a new user/session has logged onto the system. Generally you don't want lots of logons per second, as establishing a session is a fairly slow operation. If you do have lots, you should probably look at connection pooling.
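If you want to see the raw counters behind that line outside of AWR, here's a minimal sketch, assuming you have SELECT access to the V$ performance views:

    -- Cumulative logons since instance startup, and sessions currently logged on
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('logons cumulative', 'logons current');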
Sorts - often you want data sorted (by date, alphabetically). Small data sets can be sorted in memory. Larger ones can spill to disk, which is slower. If you are doing all your sorts in memory, that doesn't suggest a problem.
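To check whether any sorts are spilling to disk, a quick query against V$SYSSTAT (same access assumption as above); the In-memory Sort % figure is essentially memory sorts divided by total sorts:

    -- Compare in-memory sorts against sorts that spilled to disk
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('sorts (memory)', 'sorts (disk)');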
Executes - SQL generally goes through PARSE, BIND, EXECUTE and FETCH phases. You may have multiple fetches per execution (fetch the first 10 rows, the next 10, etc.). Equally, some SQL statements don't have a fetch (e.g. an INSERT). A transaction consists of a number of SQL statements. If you have 20-30 SQL statements per transaction, then you have some reasonably complex transactions. Not every statement is an isolated transaction in its own right. Executes per second is the more fundamental measure - see my closing comment.
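The per-second figure in the report is just the change in cumulative counters like these divided by the snapshot interval; a sketch of where they come from:

    -- Raw counters behind the parses/executes-per-second figures
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('parse count (total)', 'execute count');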
% SQL with executions > 1 - You can have multiple bind/execute cycles per parse (which is a good thing, as parsing can be expensive). Most of your SQL statements are executed more than once, so that's good.
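To see which cursors are being re-executed, you can look at V$SQL. This is just an illustrative query, limited to the top 10 using 11g-compatible ROWNUM pagination:

    -- Statements still in the shared pool, ordered by how often each
    -- parsed cursor has been re-executed
    SELECT *
    FROM  (SELECT sql_id, parse_calls, executions
           FROM   v$sql
           ORDER BY executions DESC)
    WHERE  ROWNUM <= 10;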
Buffer Cache - Amount of memory for copies of data blocks. There's no 'good' or 'bad', as it depends on the memory on the server.
Rollbacks - 0.2 per transaction is... odd. It suggests that 20% of your transactions are rolled back rather than committed. It might be nothing to worry about; it could just be the way a back or cancel button works. Unless there are lots of errors being thrown around forcing rollbacks, it isn't a database problem, just the way the application works.
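You can sanity-check that 20% figure from the cumulative counters; since every transaction ends in either a commit or a rollback:

    -- Rollbacks per transaction = user rollbacks / (user commits + user rollbacks)
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('user commits', 'user rollbacks');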
Buffer Hit % - Percentage of data block reads coming straight from memory without needing to go to disk. Reading from memory is faster than disk, so high is 'good' (especially for OLTP apps - data warehouses generally deal with more data than can fit in memory). But don't get excited about ratios. If you can reduce physical I/Os (reads from disk), good, but there's no benefit in generating additional reads of blocks in memory just to push the ratio up.
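For reference, the classic hit-ratio formula computed from V$SYSSTAT looks like this (note these are cumulative values since instance startup, whereas AWR works on the deltas between two snapshots):

    -- Buffer cache hit ratio: 1 - physical reads / logical reads,
    -- where logical reads = db block gets + consistent gets
    SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_hit_ratio
    FROM   v$sysstat phy,
           v$sysstat db,
           v$sysstat con
    WHERE  phy.name = 'physical reads'
    AND    db.name  = 'db block gets'
    AND    con.name = 'consistent gets';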
Shared Pool Size - Again, a measurement of memory; the shared pool caches parsed SQL, PL/SQL and data dictionary information, among other things.
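Both of these sizes (and the rest of the dynamically managed SGA) can be read directly from the instance, for example:

    -- Current sizes of the dynamically managed SGA components,
    -- including the buffer cache and shared pool
    SELECT component, current_size / 1024 / 1024 AS size_mb
    FROM   v$sga_dynamic_components;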
========================================================================

Ultimately, none of these are great for measuring database performance. What matters is if the application users (or developers or managers) are saying that their programs are or are not meeting their performance specifications.
Executes per second is an important measure, but only against your required workload. If users want to do 50 executions per second, are only doing 20, and are working overtime as a result, then you have an issue. If they only need to do 10 a second and spend half the day on YouTube, you are doing fine.
Equally, a rate averaged over the eight hours of a working day can tell a very different story from one averaged over 24 hours when everyone is at home for 16 of them.
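It's worth checking exactly which window your two snapshots span before reading too much into the per-second rates; something like:

    -- Which period does the report actually cover? Per-second rates are
    -- averaged over the interval between the two snapshots you picked
    SELECT snap_id, begin_interval_time, end_interval_time
    FROM   dba_hist_snapshot
    ORDER BY snap_id;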