PostgreSQL database size (tablespace size) much bigger than the calculated total sum of relations

Hello all,

I see a very big difference between the actual database size (on the HDD, and as reported by the pg_database_size() call) and the size calculated by summing up the total relation sizes retrieved by pg_total_relation_size().

The former is 62G and the latter is 16G (the difference being exactly the amount of data deleted from the biggest table).

Here is a simplified query that shows that difference on my system:

select current_database(),
       pg_size_pretty( sum(total_relation_raw_size)::bigint ) as calculated_database_size,
       pg_size_pretty( pg_database_size(current_database()) ) as database_size   
  from (select pg_total_relation_size(relid) as total_relation_raw_size
          from pg_stat_all_tables -- this also includes system tables shared between databases
         where schemaname != 'pg_toast'
       ) as stats;

It seems like there is some dangling data there. This situation appeared after we dumped and full-vacuumed lots of unused data from that DB.

P.S.: I suppose it was database corruption of some sort... The only way to recover from this situation was to switch to the hot-standby database...
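
For reference, a query along these lines (just a sketch) lists the largest relations, TOAST tables and indexes included, and can help locate where the space actually went:

-- assumption: run by a user allowed to read pg_class; lists the ten largest
-- relations (ordinary tables 'r' and their TOAST tables 't') with total size
SELECT c.oid::regclass                               AS relation,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
 WHERE c.relkind IN ('r', 't')
 ORDER BY pg_total_relation_size(c.oid) DESC
 LIMIT 10;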


As BobG writes, LOBs are a very valid concern, since they are not deleted when the rows of your application table (containing their OIDs) get deleted.

These will NOT be deleted automatically by the VACUUM process; they only go away once you have run vacuumlo on them.

Vacuumlo will delete all of the unreferenced LOBs from the database.

Example call:

vacuumlo -U postgres -W -v <database_name>

(I only included the -v to make vacuumlo a bit more verbose so that you see how many LOBs it removes)

After vacuumlo has deleted the LOBs, you can run VACUUM FULL (or let the auto-vacuum process run).
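
To check that it actually removed something, you can count the remaining large objects before and after the run; a quick sketch, assuming PostgreSQL 9.0 or later where pg_largeobject_metadata is present:

-- one row per large object in the database; the count should drop after vacuumlo
SELECT count(*) AS large_objects
  FROM pg_catalog.pg_largeobject_metadata;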


Do you have unused LOBs?

If you have something like this:

CREATE TABLE bigobjects (
    id BIGINT NOT NULL PRIMARY KEY,
    filename VARCHAR(255) NOT NULL,
    filecontents OID NOT NULL
);

followed by:

\lo_import '/tmp/bigfile'
11357
INSERT INTO bigobjects VALUES (1, 'bigfile', 11357);
TRUNCATE TABLE bigobjects;

You'll still have the LOB (id 11357) in the database.

You can check the pg_catalog.pg_largeobject system catalog table for all the large objects in your database (I recommend SELECT DISTINCT loid FROM pg_catalog.pg_largeobject unless you want to see all your LOB data as octal).
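
To see which of those large objects are no longer referenced by the example table above (this is essentially what vacuumlo automates across every OID column in the database), a sketch along these lines works:

-- large objects whose OID no longer appears in bigobjects.filecontents;
-- these are the candidates that vacuumlo would unlink
SELECT DISTINCT lo.loid
  FROM pg_catalog.pg_largeobject lo
 WHERE NOT EXISTS (SELECT 1
                     FROM bigobjects b
                    WHERE b.filecontents = lo.loid);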

If you clean out all your unused LOBs and do a VACUUM FULL, you should see a hefty reduction in storage. I just tried this on a personal dev database I've been using and saw a reduction in size from 200MB down to 10MB (as reported by pg_database_size(current_database()).)
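
For a single known orphan like the one left behind by the TRUNCATE above, you can also unlink it by hand and then compact pg_largeobject; a sketch, reusing the OID 11357 from the example:

-- remove the orphaned large object, then reclaim the space it occupied
-- (vacuuming a system catalog like pg_largeobject usually needs superuser rights)
SELECT lo_unlink(11357);
VACUUM FULL pg_catalog.pg_largeobject;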


This situation appeared after we dumped and full-vacuumed lots of unused data from that DB.

I had a similar experience: a 3GB DB with lots of dynamic data that grew to 20GB over a month or so. Manually deleting / vacuuming the problematic tables didn't seem to have any effect. Then we just ran a final

VACUUM FULL ANALYZE 

on the WHOLE DB ... and it dropped to half the size.

It took 4 hours, so be careful with that.
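
A cheap way to judge whether the full vacuum was worth the downtime is to compare the reported size before and after; a sketch using the same functions as the original question:

-- size as reported before the full vacuum
SELECT pg_size_pretty(pg_database_size(current_database()));
-- takes exclusive locks and can run for hours on a large database
VACUUM FULL ANALYZE;
-- size as reported afterwards
SELECT pg_size_pretty(pg_database_size(current_database()));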


Your query is specifically screening out pg_toast tables, which can be big. See if getting rid of that where schemaname != 'pg_toast' clause gets you a more accurate answer.
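
For reference, that would just be the original query with the filter dropped (a sketch; note that pg_total_relation_size() already includes TOAST data for the owning table, so counting the pg_toast entries as well may double-count that data):

select current_database(),
       pg_size_pretty( sum(total_relation_raw_size)::bigint ) as calculated_database_size,
       pg_size_pretty( pg_database_size(current_database()) ) as database_size
  from (select pg_total_relation_size(relid) as total_relation_raw_size
          from pg_stat_all_tables -- pg_toast entries no longer filtered out
       ) as stats;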

