
Accessing DB2 through StarSQL - Efficiency: Output Parameters or a SELECT Query?

I'm working on some code that queries an old DB2 database running on a nice old mainframe. The query has to be written in StarSQL. The goal is to drastically reduce the MIPS usage of the current query, which is being passed in via Command.Text.

For those who don't know, mainframe usage is billed (a LOT!) by CPU consumption (MIPS), so you want things to run as efficiently as possible. You don't really want to pass "SELECT * FROM TableA" as CommandType.Text to the database, because it has to compile that statement before returning the results. You'd want to save it as a stored procedure (which is already compiled) and replace the * with the exact columns you need.
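To make that concrete, here is a minimal sketch of the two shapes in DB2 SQL. The table PHNS and the column CLNT_INTERNL_KY come from the query below; the procedure name GET_CLIENT_KEYS and the column CLNT_NM are hypothetical stand-ins:

```sql
-- Dynamic SQL passed via Command.Text: DB2 has to compile this on
-- every execution, and SELECT * fetches every column whether or not
-- the caller needs it.
SELECT * FROM PHNS;

-- A SQL stored procedure: the access plan is prepared once when the
-- procedure is built, and only the needed columns are fetched.
CREATE PROCEDURE GET_CLIENT_KEYS ()
  DYNAMIC RESULT SETS 1
  LANGUAGE SQL
BEGIN
  DECLARE C1 CURSOR WITH RETURN FOR
    SELECT CLNT_INTERNL_KY, CLNT_NM   -- explicit columns, not *
      FROM PHNS;
  OPEN C1;
END
```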

So, to my question, which I can't answer myself because our mainframe guru is on vacation...

I have a stored procedure that returns ~30 columns. Would it be more efficient to return them via a SELECT result set, or as output parameters?

I'm not worried about the length of the C# code, only about efficiency on that blasted mainframe.

I need to take into account things like:

SELECT PHNS.CLNT_INTERNL_KY as CLNT_INTRNL_KY

Applying that column alias uses extra CPU, but would it be more efficient to return the value as an output parameter using cursors?
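For reference, here is a hedged sketch of both options side by side. PHNS and CLNT_INTERNL_KY are from the question; the procedure names, P_KEY, and the COL2/COL3 placeholder columns are hypothetical (a real version would list all ~30 columns):

```sql
-- Option 1: return the columns as a result set via a cursor.
CREATE PROCEDURE GET_CLIENT_RS (IN P_KEY INTEGER)
  DYNAMIC RESULT SETS 1
  LANGUAGE SQL
BEGIN
  DECLARE C1 CURSOR WITH RETURN FOR
    SELECT CLNT_INTERNL_KY, COL2, COL3   -- ...through ~30 columns
      FROM PHNS
     WHERE CLNT_INTERNL_KY = P_KEY;
  OPEN C1;
END

-- Option 2: return the same values as OUT parameters. This only
-- works for a single row, and needs one OUT parameter per column.
CREATE PROCEDURE GET_CLIENT_OUT (IN  P_KEY  INTEGER,
                                 OUT O_KEY  INTEGER,
                                 OUT O_COL2 VARCHAR(50),
                                 OUT O_COL3 VARCHAR(50))
  LANGUAGE SQL
BEGIN
  SELECT CLNT_INTERNL_KY, COL2, COL3
    INTO O_KEY, O_COL2, O_COL3
    FROM PHNS
   WHERE CLNT_INTERNL_KY = P_KEY;
END
```

Note that in either form the statement is compiled once when the procedure is built, and the AS alias is resolved at compile time rather than per row, so the alias itself should not be a meaningful per-execution cost.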

If you would like any other information let me know.

Cheers,

(There "could" be a StarSQL tag on this, but alas, I don't have 1500 points...)


Many query APIs do not permit the use of stored procedures, but if you have that option, using a stored procedure can save some CPU time by parsing and optimizing the query once at compile time rather than during every execution. If not, you may still benefit from the caching that takes place for dynamic SQL statements. The access plans for dynamic statements are temporarily stored in the package cache, so if the same byte-for-byte identical statement (including parameter markers and literal values) is encountered soon enough, DB2 will reuse the access plan from the package cache instead of optimizing it from scratch all over again.
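For example, package-cache reuse hinges on the statement text matching byte for byte, which is why parameter markers matter. The table and column names below are from the question; the literal key values are made up:

```sql
-- These two statements are distinct cache entries: the literal is
-- baked into the text, so each new key value compiles a new plan.
SELECT CLNT_INTERNL_KY FROM PHNS WHERE CLNT_INTERNL_KY = 12345;
SELECT CLNT_INTERNL_KY FROM PHNS WHERE CLNT_INTERNL_KY = 67890;

-- With a parameter marker the text is identical on every call, so
-- subsequent executions can reuse the cached access plan.
SELECT CLNT_INTERNL_KY FROM PHNS WHERE CLNT_INTERNL_KY = ?;
```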

Using stored procedures can save a significant amount of compile time for statements that are run very frequently with different literal values. In cases where the input parameters are optional or can vary considerably (such as flexible searches), a stored procedure could produce an undesirable access plan at compile time because it doesn't know which parameters will be populated at runtime. In those situations, the stored procedure may need to re-optimize the query at runtime through a REOPT policy, but that approach obviously takes away the savings of precompilation.
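On DB2 for z/OS this trade-off is typically controlled at bind time through the REOPT bind option. A hedged sketch (the collection and package names are hypothetical):

```sql
-- REOPT(ALWAYS) reprepares the statement at each execution using the
-- actual parameter values, trading precompilation savings for a plan
-- tailored to the inputs; REOPT(NONE) keeps the static plan.
REBIND PACKAGE(MYCOLL.MYPKG) REOPT(ALWAYS)
```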

I'd recommend using DB2's EXPLAIN facility to determine where the real costs are in your query workload (compilation vs. scanning vs. sorting). If a query scans a significant number of rows, the CPU costs of evaluating each row can quickly surpass the expense of optimizing a dynamic query. Queries that issue SELECT * often prevent the optimizer from exploiting an index that could satisfy the same query with fewer I/O operations. Filter and join predicates in the WHERE clause (or the lack of them) can also prevent the optimizer from selecting an index.
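A minimal sketch of using EXPLAIN, assuming the standard PLAN_TABLE exists under your authorization ID (QUERYNO is an arbitrary tag you choose so you can find the rows afterward):

```sql
-- Capture the optimizer's chosen plan for a candidate statement.
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT CLNT_INTERNL_KY FROM PHNS WHERE CLNT_INTERNL_KY = ?;

-- Inspect the result: ACCESSTYPE 'I' indicates index access,
-- 'R' indicates a table-space scan.
SELECT QBLOCKNO, PLANNO, ACCESSTYPE, ACCESSNAME
  FROM PLAN_TABLE
 WHERE QUERYNO = 101
 ORDER BY QBLOCKNO, PLANNO;
```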

