Equivalent of SQL Server trace for Oracle 10g?

I've got a bad code duplication problem where we cannot tell which versions of packages/procedures are being used, i.e. I have different packages with procedures of the same name, and I need to be sure which packages are not actually being called.

I'm trying to eliminate procedures/packages of the same name which are no longer being called. I'd like to run the trace for a few weeks, combine the data and query the metadata against it and find candidates for further investigation and deactivation.

I'm looking for the equivalent of SQL Server's sp_trace_create with sp_trace_setevent event 42 (SP:Starting) - this would let me see all the stored procedures being called and compare that against the full inventory of stored procedures to see which ones to investigate first for obsolescence. I have used DBMS_MONITOR.DATABASE_TRACE_ENABLE to generate trace files, but I haven't found anything explaining how to use TKPROF to parse those files for anything but performance - and I don't care about the statements themselves, just which routines are being used.


I don't think ILO will help (does it do more than just set the module and action and log the time somewhere?), as Cade Roux would have to modify every package and function in the database.

One option is to switch on tracing at the database,

 ALTER SYSTEM SET sql_trace = true SCOPE=MEMORY;

and use a script to extract a unique list of all the package names executed. WARNING - this will generate a lot of output and fill up disk space very rapidly. You may want to create a file of the procedures run each day, then remove the .trc files, and finally combine the daily results to identify which ones are used.

You also need to ensure there is no limit on the trace file size (which could stop it logging everything) and that you have plenty of disk space. As you are not interested in the runtimes, you can also turn timed_statistics off to reduce the amount of logging.

 alter system set max_dump_file_size=unlimited;
 alter system set timed_statistics=false;
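Since the question already mentions DBMS_MONITOR.DATABASE_TRACE_ENABLE, the equivalent enable/disable pair looks roughly like this - a sketch with waits and binds switched off, since only the statement text matters here:

-- instance-wide trace; waits/binds off as we only need statement text
BEGIN
    DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits => FALSE, binds => FALSE);
END;
/

-- and the matching call to switch it off again when done
BEGIN
    DBMS_MONITOR.DATABASE_TRACE_DISABLE;
END;
/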

If I look at the raw trace files created by Oracle tracing, each application-specific package procedure call appears like this:

=====================
PARSING IN CURSOR #2 len=69 dep=0 uid=102 oct=47 lid=102 tim=1316845390611021 hv=273704950 ad='b0d4c728' sqlid='01qnrr4850tzq'
BEGIN REIM_MATCH_SQL.INIT_SUMMARY_MATCH(:1, :2, :3, :4, :5, :6); END;
END OF STMT
PARSE #2:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,plh=0,tim=1316845390611021

so you can grep for the list of package calls across the trace directory like this (you may have to tweak the cut fields a little depending on your OS and the exact trace file format):

find . -name "*.trc" -exec grep "BEGIN " {} \; | cut -d" " -f2 | cut -d"(" -f1 | sort -u > ~/called_procedures.txt

This doesn't work for functions, as they can be called in the middle of SQL statements, but for procedures it should be OK. It produces a file in your home directory containing the unique list of procedure calls made since you turned on tracing.

I ran this on a directory containing 9818 trace files totalling 19GB of disk space and it took 10 minutes on a test box running Oracle Enterprise Linux 5 with 2 cores and 12GB of memory. Those trace files came from repeated test runs of a single program, so you can imagine how quickly they will accumulate if you trace everything on a production machine.

Then you can get a list of all packages/procedures in the database from SQL*Plus for the particular schema you are interested in:

set heading off
set trimspool on
set pagesize 0
set feedback off
spool all_procedures.txt 
SELECT DISTINCT p.object_name||'.'||p.procedure_name
  FROM all_procedures p
  JOIN all_objects o
    ON (o.owner = p.owner
        AND o.object_name = p.object_name
        AND o.object_type = 'PACKAGE')
 WHERE p.owner = '&owner'
 ORDER BY 1;
spool off

and finally do a diff of the two to get a list of candidates

diff all_procedures.txt called_procedures.txt
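If you would rather do the comparison inside the database, a sketch along these lines works too - it assumes a hypothetical staging table called_procedures that you load from called_procedures.txt (via SQL*Loader or an external table):

-- hypothetical staging table, loaded from ~/called_procedures.txt
CREATE TABLE called_procedures (proc_name VARCHAR2(100));

-- candidates = every package procedure in the schema
--              minus everything seen in the trace files
SELECT DISTINCT object_name||'.'||procedure_name
  FROM all_procedures
 WHERE owner = '&owner'
MINUS
SELECT proc_name FROM called_procedures;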

Don't forget to switch off tracing when you are done:

ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;

Also, be careful about procedures which only run at certain periods, like month end or year end - they may appear unused, but you can't be sure unless every business process completes during the monitoring window.

Hope that helps.


A running Oracle database keeps execution counts for the objects in its shared pool. They can be accessed through the v$db_object_cache view.

select * from v$db_object_cache
where type in ('PROCEDURE');

The ones you don't need will either not appear in the list at all (never loaded) or be listed with 0 executions.
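To widen the net beyond standalone procedures, a sketch like this (assuming you can query the V$ views) also picks up functions and packages - with the caveat that the counts only cover the time an object has stayed in the shared pool, so anything aged out and reloaded restarts at zero:

SELECT owner, name, type, loads, executions
  FROM v$db_object_cache
 WHERE type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY')
 ORDER BY executions;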


The Oracle Trace functionality has been removed from Oracle 10g. Instead, the SQL Trace and TKPROF functionality should be used.

In multi-tier environments where statements are passed to different sessions by the application server, it can become difficult to trace an individual process from start to finish. To solve this problem Oracle have introduced End to End Application Tracing, which allows a client process to be identified via the client identifier rather than the typical session id. Each piece of trace data is linked to the following attributes.

  • Client Identifier - Specifies the "real" end user. Set using the DBMS_SESSION.SET_IDENTIFIER procedure.
  • Service - Specifies a group of related applications. Created using the DBMS_SERVICE.CREATE_SERVICE procedure.
  • Module - Specifies a functional area or feature of an application. Set using the DBMS_APPLICATION_INFO.SET_MODULE procedure.
  • Action - Specifies the current action (INSERT, UPDATE, DELETE etc.) within the current module. Set using the DBMS_APPLICATION_INFO.SET_ACTION procedure.
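As a rough sketch of how these pieces fit together (the identifier and module names below are made up for illustration): the application session tags itself, a DBA enables tracing for that tag, and the trcsess utility can later combine the resulting trace files by client identifier.

-- in the application session: tag the work (names are hypothetical)
BEGIN
    DBMS_SESSION.SET_IDENTIFIER('batch_user');
    DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'invoice_matching',
                                     action_name => 'summary_match');
END;
/

-- as a DBA: trace every session carrying that client identifier
BEGIN
    DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => 'batch_user');
END;
/

-- and switch it off again when done
BEGIN
    DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'batch_user');
END;
/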

Reference:

http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php#tracing_enhancements


You may be able to automate this by calling DBMS_PROFILER from a schema LOGON and LOGOFF trigger.

--Run this script to install the profiler objects for the user
@...\RDBMS\ADMIN\proftab.sql


--Create some objects for testing
create or replace function test1 return varchar2 is
begin return 'test1'; end;
/

create or replace function test2 return varchar2 is
begin return test1(); end;
/

create or replace package test3 is
    function test4 return varchar2;
    function test5 return varchar2;
end;
/

create or replace package body test3 is
    function test4 return varchar2 is begin return test2(); end;
    function test5 return varchar2 is begin return 'test5'; end;
end;
/

--Create the LOGON trigger to start profiling
create or replace trigger profiler_logon after logon on test_schema.schema
begin
    dbms_profiler.start_profiler(run_comment => sysdate);
end;
/


--Create the LOGOFF trigger to write the profiling data
create or replace trigger profiler_logoff before logoff on test_schema.schema
begin
    dbms_profiler.stop_profiler();
    dbms_profiler.flush_data();
end;
/


--Logon, run this:
select test3.test4() from dual;
select test3.test5() from dual;
--Logoff, logon, run this:
select test1() from dual;
--Logoff, logon to see results



--The data is stored in these tables.
select * from plsql_profiler_runs;
select * from plsql_profiler_units;
select * from plsql_profiler_data;


--Use a query like this to find the most common objects.
--But total_occur is both too low and too high:
--  It records at the object level, test4 and test5 are combined.
--  It seems to double many counts - maybe an extra call for initializing?
select unit_type, unit_owner, unit_name, sum(total_occur) total_occur
from plsql_profiler_units
join
(
    --Get max number of times any line number was called.
    select runid, unit_number, max(total_occur) total_occur
    from plsql_profiler_data
    where total_occur > 0
    group by runid, unit_number
) profiler_data
    on plsql_profiler_units.unit_number = profiler_data.unit_number
    and plsql_profiler_units.runid = profiler_data.runid
where plsql_profiler_units.unit_type <> 'ANONYMOUS BLOCK'
group by unit_type, unit_owner, unit_name
order by total_occur desc, unit_type, unit_owner, unit_name;


UNIT_TYPE       UNIT_OWNER      UNIT_NAME   TOTAL_OCCUR
---------       ----------      ---------   -----------
FUNCTION        TEST_SCHEMA     TEST1       4
FUNCTION        TEST_SCHEMA     TEST2       2
PACKAGE BODY    TEST_SCHEMA     TEST3       2

But there are lots of issues here:

  1. Performance - Does profiling add a lot of overhead? I've never used it like this before; I don't think it was intended for long-term production use. You may need to periodically delete or truncate the data, or perhaps modify some of the tables in proftab.sql, e.g. remove some of the indexes and foreign keys (although I haven't tried this, and it may make things worse!).

  2. Silent failure - If something can't be profiled, it just won't show up. You'll need to be very careful that it's working properly for all schemas and objects. From the reference: "The profiler only gathers data for units for which a user has CREATE privilege; you cannot use the package to profile units for which EXECUTE ONLY access has been granted. In general, if a user can debug a unit, the same user can profile it. However, a unit can be profiled whether or not it has been compiled DEBUG. Oracle advises that modules that are being profiled should be compiled DEBUG, since this provides additional information about the unit in the database."

  3. There's no easy way to find which function or procedure inside a package was used. To get this information you'll need to compare line numbers from dba_source.line to plsql_profiler_data.line# (see the sketch after this list). Unfortunately, fully automating that can be insanely difficult because you need to parse PL/SQL. Depending on how clean and simple your code is, it might be easier to go through the code manually and build a table of the relevant function and procedure start and end line numbers.

  4. Session problems - Do logoff triggers sometimes fail? If the connection is lost, or there's a serious database error, I'd guess no logoff trigger fires. In that case the profiler data would never be written to the tables.

  5. You'll need to set this up for all users and combine their data. Or possibly you can replace the tables with synonyms pointing to one common set of tables?

  6. You'll have to filter through a lot of junk, like ANONYMOUS BLOCKS and other weird entries.
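For issue 3, a starting-point sketch is to join the profiler line numbers straight to the source text, so you can at least eyeball which procedure inside a package was hit (this assumes the profiled units are visible to you in all_source):

SELECT u.unit_owner, u.unit_name, d.line#, s.text, d.total_occur
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON  d.runid = u.runid
    AND d.unit_number = u.unit_number
  JOIN all_source s
    ON  s.owner = u.unit_owner
    AND s.name  = u.unit_name
    AND s.type  = u.unit_type
    AND s.line  = d.line#
 WHERE d.total_occur > 0
   AND u.unit_type <> 'ANONYMOUS BLOCK'
 ORDER BY u.unit_owner, u.unit_name, d.line#;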
