Why does Oracle varchar2 have a mandatory size as a definition parameter?

I want to know why Oracle requires the size parameter in the definition of a VARCHAR2 column.

I think it is there as a constraint. Would it be a better option for Oracle to treat this parameter as optional, like the NUMBER data type does?

I often have problems resizing columns in old tables to larger sizes, because sometimes a value turns out to be bigger than the declared size of the VARCHAR2 column.

To me, defining a column as VARCHAR2(10) is the same as defining it as VARCHAR2(1000).

I guess it's an unnecessary constraint. If not, do you know of a real case where this constraint proved useful? And why is there no such declaration for the NUMBER type?


"Defining a column as VARCHAR2(10) is the same as defining it as VARCHAR2(1000)."

No, it is not the same thing at all.

  1. The length of the column is useful metadata for developers building screens.
  2. Similarly, automatic query tools like TOAD and SQL Developer use the length of the column when they render results.
  3. The database uses the declared length of a variable when allocating memory for PL/SQL collections. As that memory comes out of the PGA, supersizing the variable declaration can lead to programs failing because the server runs out of memory (see the sketch after the index example below).
  4. There are similar issues with the declaration of individual variables in PL/SQL programs; it is just that collections tend to multiply the problem.
  5. Supersized columns create problems for compound indexes. The following is on a database with 8K blocks:


SQL> create table t23 (col1 varchar2(4000), col2 varchar2(4000))
  2  /

Table created.

SQL> create index t23_i on t23(col1,col2)
  2  /
create index t23_i on t23(col1,col2)
                      *
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded


SQL>
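To illustrate point 3 above, here is a minimal sketch of how supersized declarations can eat PGA. It leans on the documented PL/SQL behaviour that VARCHAR2 variables declared at under 4000 bytes are allocated at their full maximum size (larger declarations are allocated dynamically); the type names and numbers are illustrative.

declare
  -- each element may be allocated at its full declared size,
  -- even though we only ever store a single byte in it
  type big_tab is table of varchar2(3999) index by pls_integer;
  t big_tab;
begin
  for i in 1 .. 1000000 loop
    t(i) := 'x';
  end loop;
  -- with eager allocation this can demand gigabytes of PGA;
  -- the same collection of varchar2(30) would need a few tens of MB
end;
/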

But above all else, column sizes are a form of error checking. If the column is supposed to be ten characters long and some automated process is trying to load a thousand characters, then something is wrong. The process ought to fail so we can investigate why we are loading duff data. The alternative is a database full of garbage, and if that is what was wanted we should just have given everybody Excel and been done with it.
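As a minimal illustration (the table name is made up), an oversized value is rejected at the door:

create table t_codes (code varchar2(10));

insert into t_codes values ('ABCDEFGHIJ');
-- ten characters: succeeds

insert into t_codes values ('ABCDEFGHIJK');
-- eleven characters: fails with something like
-- ORA-12899: value too large for column "T_CODES"."CODE" (actual: 11, maximum: 10)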

It is true that changing the column size when it turns out we have underestimated can be tiresome. But it doesn't happen very often, and we can mitigate a lot of the pain by using %TYPE and SUBTYPE declarations in our PL/SQL instead of hard-coding variable lengths.
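For instance, a minimal sketch of that mitigation, reusing the t23 table from the example above:

declare
  -- %TYPE anchors the variable to the column, so a later
  -- ALTER TABLE ... MODIFY is picked up automatically on recompile
  l_value t23.col1%TYPE;

  -- a named subtype keeps the length in one place instead of many
  subtype short_code is varchar2(10);
  l_code short_code;
begin
  l_value := rpad('x', 4000, 'x');  -- fits: col1 was declared varchar2(4000)
  l_code  := 'ABC123';
end;
/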


"why no such declaration in NUMBER type"

Numbers are different. For a start, the maximum size of a number is much smaller than the text equivalent (38 digits of guaranteed precision).

But the key difference is that Oracle stores numeric values in scientific notation, so there is no straightforward relationship between the arithmetical size of the number and the storage space it consumes.

SQL> select vsize(123456789012345678901) n1
  2         , vsize(999999999999999999999999999999) n2
  3         , vsize(0.000000000000000000001) n3
  4         , vsize(1000000000000000000000000) n4
  5  from dual
  6  /

        N1         N2         N3         N4
---------- ---------- ---------- ----------
        12         16          2          2

SQL> 

Nevertheless, it remains good practice to specify scale and precision wherever possible, especially when we are dealing with integers, say, or money.
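A short sketch of what precision and scale buy us (the table and values are made up):

create table t_invoices (
  qty    number(5,0),    -- integers only, up to five digits
  amount number(10,2)    -- money: two decimal places
);

insert into t_invoices values (3, 19.999);
-- amount is rounded to the declared scale: stored as 20.00

insert into t_invoices values (123456, 1.00);
-- qty has six digits: fails with something like
-- ORA-01438: value larger than specified precision allowed for this column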


I think it's important to remember the historical context in which relational databases were developed. At the time (the late 1970s to early 1980s), commonly available computers were much smaller (in terms of memory and disk space) and less powerful (in terms of CPU) than those we have now, and managing these resources was necessarily a compelling concern. COBOL was the common language of business computing (and is still widely used), and object-oriented languages such as Smalltalk and C++ were unknown for all practical purposes. At that time it was expected that programs would declare precisely how much storage they needed for each data element, e.g. 10 bytes for a string, 2 bytes for a short integer, 4 bytes for a float, etc., and this style of declaration was adopted by the then-newly-developed relational databases. More to the point, the assumption that each data element would declare (implicitly or explicitly) the amount of storage it required was coded into the relational engines at a very fundamental level.

Now, over time this requirement has relaxed somewhat, at least as far as storing the data on disk goes. I believe that in Oracle the NUMBER data type will flexibly allocate space so that only the minimum amount of space needed to store its value is actually used, and that VARCHAR2 columns will only use enough disk space to store the actual data without storing trailing blanks, although you still need to declare the maximum amount of storage required for a VARCHAR2.

You might take a look at the SYS.STANDARD package to get an idea of how to declare VARCHAR2 subtypes. For example, if you wanted your own 'string' type which you could use without tacking on a length specification, you might try:

SUBTYPE MY_STRING IS VARCHAR2(4000);
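In practice such a subtype would live in a package specification; a minimal sketch of that (the package name is made up):

create or replace package my_types as
  subtype my_string is varchar2(4000);
end my_types;
/

declare
  s my_types.my_string;  -- no length needed at the point of use
begin
  s := 'no length specification here';
end;
/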

However, be wary of this if you're going to index the column in question (as pointed out earlier by @APC).

I agree that I'd rather just be able to declare a STRING (which, BTW, is defined in SYS.STANDARD as a subtype of VARCHAR2) without having to declare a length, but that's just not how Oracle works, and as I'm not about to start writing my own relational database (I have my own windmills to tilt at, thank you :-) I'll just go along with the status quo.

I hope this helps.


Why not have every column in every database table be a CLOB? That way you don't have to worry about maximum lengths...

But, seriously:

Data type length constraints are there for the same reason as any constraints: they reduce the amount of error checking you need to sprinkle through all your application code, by ensuring that any data successfully stored in the table adheres to the constraints you've defined.


Even though a VARCHAR2 column doesn't allocate a set number of bytes on disk like a CHAR field would, there are still decent reasons for sizing it:

  • Memory allocation in data readers (based on the maximum row size)
  • Indexing a large column brings block sizes into play
  • Etc...

I'm sure there are more reasons someone else can think of, but those are the ones I've seen in a past project where someone chose to make everything VARCHAR2(4000).


From the point of view of extracting information, it is very useful to know how big the field is. For example, if you have to print an address on an envelope or display it on a screen, you want to know how much space to allow for it.

Or buy VERY large envelopes.


There's a possible performance impact too: in MySQL, temporary tables and MEMORY tables store a VARCHAR column as a fixed-length column, padded out to its maximum length.

If you design VARCHAR columns much larger than the greatest size you need, you will consume more memory than you have to. This affects cache efficiency, sorting speed, etc.

So specify a maximum length close to what your data actually needs: if your longest value is 10 characters, don't declare the column as 100 or more.
