What are the disadvantages of choosing higher timestamp precision in Oracle?
Oracle allows you to specify the precision of the TIMESTAMP
type in a table - the number of digits in the fractional part of the SECOND
datetime field. Are there any disadvantages to specifying the maximum precision, TIMESTAMP(9)?
One reason I could think of is that this information may be used for prettier output by Oracle tools.
A maximum of 9 digits suggests that the fractional part is stored as a 4-byte integer, so it should not have any performance implications; please correct me if I'm wrong here.
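A quick way to check the storage question directly would be to dump the internal representation of a couple of values; a minimal sketch, assuming a scratch table that can be dropped afterwards:
create table ts_size_check (a timestamp(1), b timestamp(9));
insert into ts_size_check values (localtimestamp(1), localtimestamp(9));
--Both columns should typically report 11 bytes when the fractional seconds are
--non-zero (7 bytes for the date part plus 4 bytes for the fraction).
select vsize(a) as bytes_ts1, vsize(b) as bytes_ts9,
       dump(a)  as raw_ts1,   dump(b)  as raw_ts9
from   ts_size_check;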
There are no disadvantages; use timestamp(9) if it makes sense.
Timestamp(9) and timestamp(1) use the same amount of space, and their performance is virtually identical. I could only find one case where there was a performance difference, and in that case timestamp(9) was actually faster than timestamp(1).
(I'll spare you the many lines of boring code inserting into timestamp(1) and timestamp(9) columns and comparing different operations on them.)
This demonstrates that they use the same amount of space (inserting many values and comparing dba_segments):
--Create tables with timestamps and populate them with the same data (with different precision)
--Set initial and next to a low value so we can closely check the segment size.
create table timestamp1 (t1 timestamp(1), t2 timestamp(1), t3 timestamp(1), t4 timestamp(1), t5 timestamp(1))
storage(initial 65536 next 65536);
insert into timestamp1
select current_timestamp(1), current_timestamp(1), current_timestamp(1), current_timestamp(1), current_timestamp(1)
from dual connect by level <= 100000;
create table timestamp9 (t1 timestamp(9), t2 timestamp(9), t3 timestamp(9), t4 timestamp(9), t5 timestamp(9))
storage(initial 65536 next 65536);
insert into timestamp9
select current_timestamp(9), current_timestamp(9), current_timestamp(9), current_timestamp(9), current_timestamp(9)
from dual connect by level <= 100000;
--Segment size is identical
select segment_name, bytes from dba_segments where segment_name in ('TIMESTAMP1', 'TIMESTAMP9');
--SEGMENT_NAME   BYTES
--TIMESTAMP1     8388608
--TIMESTAMP9     8388608
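And a sketch of the kind of operation comparison I skipped above, run against the timestamp1/timestamp9 tables just created (assuming SQL*Plus, so the elapsed time is printed after each query):
set timing on
--Full scans, filters and simple aggregates behave the same against both tables.
select max(t1), min(t5), count(*) from timestamp1 where t1 <= t5;
select max(t1), min(t5), count(*) from timestamp9 where t1 <= t5;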
This is where timestamp(9) is faster: when using current_timestamp, which you'll probably need at some point to generate the data. But we're only talking about the difference between roughly 0.175 and 0.25 seconds on my slow desktop to generate 100K timestamps. I'm not sure why timestamp(9) is faster; perhaps timestamps are always generated as timestamp(9) and then rounded to other precisions?
--current_timestamp(9) is slightly faster than current_timestamp(1)
select count(*) from
(
select *
from dual
--where current_timestamp(9) = current_timestamp(9)
where current_timestamp(1) = current_timestamp(1)
connect by level <= 100000
);
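To reproduce the timing yourself, run both variants of the query with timing enabled (again assuming SQL*Plus):
set timing on
select count(*) from
(
select *
from dual
where current_timestamp(1) = current_timestamp(1)
connect by level <= 100000
);
select count(*) from
(
select *
from dual
where current_timestamp(9) = current_timestamp(9)
connect by level <= 100000
);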
EDIT: The performance difference exists in 10g but not 11g.
The problem is performance; you have to trade it off against precision. Smaller values are read and written in fewer CPU instructions. A single CPU instruction takes less than a nanosecond, but if your server handles millions of transactions you might see some performance decrease, which would argue for using less precision, or even none at all (rounding all timestamps to the second is acceptable in most scenarios, even in banking).
But if for some reason (e.g. real-time system logging) you need more precision, you are forced to use a higher precision and thus accept a performance penalty. If your server doesn't process a large number of transactions per second there is almost no performance impact, but if you don't need the precision you're wasting memory.
Hope this helps. If you want to share your DB requirements with us, we might help you choose the best compromise.
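If second-level granularity is enough, for example, the column can simply be declared as timestamp(0) (or DATE); a minimal sketch with a hypothetical audit table:
--Hypothetical table: timestamp(0) keeps second-level granularity only;
--the fractional seconds are rounded away on insert.
create table audit_log (
    event_id   number,
    event_time timestamp(0)
);
insert into audit_log values (1, localtimestamp);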
The difference is not in the technical use of the TIMESTAMP data type, but in the application. FERC and NERC often require a certain precision when the data is used in applications labeled critical infrastructure, and in such cases they will use the highest precision made available.
Of course, making the suits happy with their sequence-of-events records often requires doing more than what is laid out in CIP-002 through CIP-009.
There are no disadvantages if you are always going to use the data as a "date/timestamp" datatype inside Oracle and in the middle tier; however, you have to look at how your entire application/solution uses that column.
- Are you truncating the data before you display it?
- Is it a compliance requirement, and is the data mainly read?
- Are you converting that column to a string to compare it to another column?
- Is it a requirement for auditing or for order capture?
Don't worry too much about the read and write performance differences; they are negligible. Evaluate your overall requirements as a whole, from storage to UI.
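For example, much of the display and string-comparison question above comes down to the format mask; a short sketch (not tied to any particular table):
--The FF element controls how many fractional digits the application actually sees.
select to_char(localtimestamp(9), 'YYYY-MM-DD HH24:MI:SS.FF9') as full_precision,
       to_char(localtimestamp(9), 'YYYY-MM-DD HH24:MI:SS.FF3') as milliseconds_only
from   dual;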