Table compression requirements in Oracle

I have a tablespace with a total size of 600 MB (this is my test machine) and 110 MB of free space. I was trying to compress a table whose size is 120 MB. When I execute the command:

alter table DOWNLOADSESSIONLOG move compress;

I get an error saying:

(Error): ORA-01659: unable to allocate MINEXTENTS beyond 16 in tablespace WAP.

I searched and everyone says to increase the tablespace, but I want to know how much space I need to give, because I don't want to add some 200/300 MB of extra space. I want to test this on my test machine and then implement it on the live system, which has a 60 GB tablespace with one 47 GB table. I want to compress that 47 GB table on the live system, but before that I want to test it in the test environment. Is there any calculation for how much space we need to give? Otherwise on the live system I would have to give lots of space unnecessarily, and that's difficult.

I'd appreciate it if somebody could give some ideas.
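
(For reference, the size and free-space figures above come from queries along these lines; a minimal sketch, assuming access to the DBA_* dictionary views:)

select sum(bytes)/1024/1024 as total_mb from dba_data_files where tablespace_name = 'WAP';
select sum(bytes)/1024/1024 as free_mb  from dba_free_space where tablespace_name = 'WAP';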


I would create a new throw-away tablespace in your test environment that has the same characteristics as your target tablespace in the production environment and then move/compress your table into that. This will give you the best estimate of how much additional space will be necessary. You can move the table back to the original tablespace and drop the new tablespace once you have this number.

Remember that you'll need at least (original size) + (compressed size) available in the tablespace for the duration of the move.
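A minimal sketch of that approach follows; the scratch tablespace name, size and datafile path are placeholders and would need to be adapted to your environment:

-- scratch tablespace: name, size and datafile path are placeholders
create tablespace scratch_ts datafile '/u01/oradata/scratch_ts01.dbf' size 200m autoextend on;

-- move/compress into it and see how big the compressed copy is
alter table DOWNLOADSESSIONLOG move compress tablespace scratch_ts;

select bytes/1024/1024 as mb from user_segments where segment_name = 'DOWNLOADSESSIONLOG';

-- move it back and drop the scratch tablespace
-- (remember that indexes go UNUSABLE after a move and need rebuilding)
alter table DOWNLOADSESSIONLOG move tablespace WAP;

drop tablespace scratch_ts including contents and datafiles;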


The key thing about compression is that it works by removing duplicate values in each block. So your test table needs to have a representative spread of data.

Two extreme tables ...

SQL> create table totally_similar
  2      ( txt varchar2(1000) )
  3  /

Table created.

SQL> insert into totally_similar
  2  select rpad('a', 1000, 'a')
  3  from dual connect by level <= 1000
  4  /

1000 rows created.

SQL> create table totally_different
  2      ( txt varchar2(1000) )
  3  /

Table created.

SQL> insert into totally_different
  2  select dbms_random.string('A',1000)
  3  from dual connect by level <= 1000
  4  /

1000 rows created.

SQL>

Before we compress, let's just check the table sizes...

SQL> select segment_name
  2         , sum(bytes)
  3         , sum(blocks)
  4  from user_segments
  5  where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
  6  group by segment_name
  7  /

SEGMENT_NAME         SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR         2097152         256
TOTALLY_DIFFERENT       2097152         256

SQL>

If we compress them we get two radically different results ...

SQL> alter table totally_similar move compress
  2  /

Table altered.

SQL> alter table totally_different move compress
  2  /

Table altered.

SQL> select segment_name
  2         , sum(bytes)
  3         , sum(blocks)
  4  from user_segments
  5  where segment_name in ('TOTALLY_SIMILAR', 'TOTALLY_DIFFERENT')
  6  group by segment_name
  7  /

SEGMENT_NAME         SUM(BYTES) SUM(BLOCKS)
-------------------- ---------- -----------
TOTALLY_SIMILAR           65536           8
TOTALLY_DIFFERENT       2097152         256

SQL>

Note that TOTALLY_SIMILAR is eight blocks big, even though every single row was the same. So you need to understand the distribution of your data before you can calculate the compression ratio. The Oracle documentation has this to say:

The compression factor that can be achieved depends on the cardinality of a specific column or column pairs (representing the likelihood of column value repetitions) and on the average row length of those columns. Oracle table compression not only compresses duplicate values of a single column but tries to use multi-column value pairs whenever possible.

Its advice when it comes to estimating the return is that a sample table of 1000 blocks of the target table should give you a good enough prediction (although more blocks give a more accurate forecast). It is hard to tell without knowing your block size, but it seems likely that your test table is much bigger than it needs to be. The important thing is whether the data in the test table is representative of your target table. So, did you create it using an export or a sample from the target table, e.g.

create table test_table as select * from big_table sample block (1)
/

You will need to adjust the percentage in the SAMPLE() clause to ensure you get at least 1000 blocks.
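A quick way to check that is to look at the sample table's block count in USER_SEGMENTS (TEST_TABLE being the sample table created above):

select blocks from user_segments where segment_name = 'TEST_TABLE';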

edit

In most cases compression should actually speed up data retrieval but YMMV. The cost of compression is paid when inserting or updating the data. How much that tax is and whether you can do anything to avoid it rather depends on the profile of your table.
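One thing worth knowing, assuming this is basic table compression (plain MOVE COMPRESS as above, rather than OLTP compression): only direct-path operations store new rows compressed, so bulk loads are best done with a direct-path insert. A sketch, where STAGING_DOWNLOADSESSIONLOG is a hypothetical source table:

-- direct-path insert: new rows are stored compressed (basic table compression);
-- a conventional insert would store them uncompressed
insert /*+ append */ into DOWNLOADSESSIONLOG
select * from STAGING_DOWNLOADSESSIONLOG;
commit;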
