Disadvantages of SQL Cursors

I was studying cursors and I read somewhere that each time you fetch a row from a cursor, it results in a network round trip, whereas a normal SELECT query makes only one round trip, however large the result set is.

Can anyone explain what that means? What do "network round trip" and "one round trip" mean in detail, with an example? And when should we use a cursor, and when should we use a while loop?


Unfortunately, that reference is incorrect.

A "normal SELECT" creates a cursor that the client fetch from. The mechanics are exactly the same as if you open and return a SYS_REFCURSOR (or any other mechanism for opening a cursor). In both cases, the client will fetch a number of rows over the network every time it requests data from the database. The client can control the number of rows that are fetched each time-- it would be exceptionally rare for the client to fetch 1 row or to fetch all the rows from a cursor in a single network round-trip.

What actually happens when a client application fetches from a cursor (no matter how the cursor was opened) is this: the client application sends a request over the network for N rows (again, the client controls the value of N). The database sends the next N rows back to the client (generally, Oracle has to continue executing the query to determine the next N rows, because Oracle does not normally materialize an entire result set). The client application does something with those N rows, then sends another request over the network for the next N rows, and the pattern repeats.
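That request/response pattern can be made explicit with fetchmany. Again a hedged sketch, with the same invented connection and table as above:

```python
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

N = 200            # the client picks N
cur.arraysize = N  # align the driver's internal batch size with N

cur.execute("SELECT id, title FROM blog_posts")

while True:
    # Roughly one network round trip: "send me the next N rows".
    rows = cur.fetchmany(N)
    if not rows:
        break  # the cursor is exhausted
    # Do something with this batch before asking for the next one.
    for post_id, title in rows:
        print(post_id, title)
```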


In virtually all database systems, the application that uses the data and the DBMS that is responsible for storing and searching that data live on separate machines, and they talk to each other over a network. Even when they are on the same machine, there is still effectively a network connection between them.

This matters because time passes between the moment an application decides it is ready to read data, the moment that request arrives over the network at the database server, the moment the database server actually produces the response, and the moment the response finally arrives back over the network at the application.
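You can get a feel for that overhead by timing a trivial query: almost all of the elapsed time is the round trip rather than query work. (Same invented connection details as above.)

```python
import time
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# SELECT 1 FROM dual does almost no work on the server, so the elapsed
# time is dominated by the network round trip itself.
start = time.perf_counter()
cur.execute("SELECT 1 FROM dual")
cur.fetchall()
print(f"one round trip took about {(time.perf_counter() - start) * 1000:.1f} ms")
```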

When you run a query for a whole set of data, you pay this cost only once. Although it may seem wasteful, it is in fact much more efficient to put the burden of holding all of the data on the application, because it is usually easier to give more resources to an application than to a database server.

When your application fetches data only one row at a time, the cost of the round trip between application and database is paid once per row. If you want to show the titles of 100 blog posts, you pay for 100 round trips to the database for that one report. What's worse, the database server has to somehow keep track of the partially consumed result set. That usually means resources that could be used to answer another request are instead held for an application that has not yet collected all of the data it originally asked for.
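The two patterns side by side, as a sketch (FETCH FIRST is Oracle 12c+ syntax; the table is still invented):

```python
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")

# Slow pattern: with a fetch size of 1, showing 100 titles costs
# roughly 100 round trips (the driver may prefetch a little).
cur = conn.cursor()
cur.arraysize = 1  # force single-row batches, for illustration only
cur.execute("SELECT title FROM blog_posts FETCH FIRST 100 ROWS ONLY")
while True:
    row = cur.fetchone()
    if row is None:
        break
    print(row[0])

# Fast pattern: the same 100 titles in about one round trip.
cur = conn.cursor()
cur.arraysize = 100
cur.execute("SELECT title FROM blog_posts FETCH FIRST 100 ROWS ONLY")
for (title,) in cur.fetchall():
    print(title)
```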

The basic rule is to talk to the database only when you absolutely have to, and to make each interaction as short as possible. This means you ask only for the data you really need (have the database do as much filtering as possible, instead of doing it in the application) and consume all of the data as quickly as possible, so that the database can move on to another task.
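In code, "have the database do the filtering" looks like the difference below (again a sketch over the invented blog_posts table, with a hypothetical published flag):

```python
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Wasteful: drag every row across the network, then filter in the app.
cur.execute("SELECT title, published FROM blog_posts")
titles = [title for (title, published) in cur.fetchall() if published == 1]

# Better: let the database filter, and ship only the rows you need.
cur.execute("SELECT title FROM blog_posts WHERE published = :p", {"p": 1})
titles = [title for (title,) in cur.fetchall()]
```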
