
Is there a limitation on the number of tables a PostgreSQL database can have?

I have created a database in PostgreSQL, let's call it testdb.

I have a generic set of tables inside this database, xxx_table_one, xxx_table_two and xxx_table_three.

Now, I have Python code where I want to dynamically create and remove "sets" of these 3 tables in my database, with a unique identifier in the table name distinguishing different "sets" from each other, e.g.

Set 1

testdb.aaa_table_one

testdb.aaa_table_two

testdb.aaa_table_three

Set 2

testdb.bbb_table_one

testdb.bbb_table_two

testdb.bbb_table_three

The reason I want to do it this way is to keep multiple LARGE collections of related data separate from each other. I need to regularly overwrite individual data collections, and it's easy if I can just drop a collection's tables and recreate a complete new set. I should also mention that the different data collections fit the same schemas, so I could store all of them in one set of tables, using an identifier column to distinguish collections, instead of separating them into different tables.
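Roughly, the Python side would look like this (a minimal sketch, not my real code; the column definitions, the `create_set`/`drop_set` helpers, and the psycopg2 usage are placeholders):

```python
import psycopg2
from psycopg2 import sql

# Placeholder column definitions; in practice these would be the real
# schemas shared by every "set" of tables.
TABLE_DEFS = {
    "table_one": "id serial PRIMARY KEY, payload text",
    "table_two": "id serial PRIMARY KEY, payload text",
    "table_three": "id serial PRIMARY KEY, payload text",
}

def create_set(conn, prefix):
    """Create one data collection, e.g. aaa_table_one .. aaa_table_three."""
    with conn.cursor() as cur:
        for suffix, columns in TABLE_DEFS.items():
            cur.execute(sql.SQL("CREATE TABLE {} ({})").format(
                sql.Identifier(f"{prefix}_{suffix}"),  # safe identifier quoting
                sql.SQL(columns)))
    conn.commit()

def drop_set(conn, prefix):
    """Drop a whole collection in one pass, so it can be rebuilt from scratch."""
    with conn.cursor() as cur:
        for suffix in TABLE_DEFS:
            cur.execute(sql.SQL("DROP TABLE IF EXISTS {}").format(
                sql.Identifier(f"{prefix}_{suffix}")))
    conn.commit()

conn = psycopg2.connect(dbname="testdb")
create_set(conn, "aaa")
drop_set(conn, "aaa")
```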

I want to know a few things:

  1. Does PostgreSQL limit the number of tables per database?
  2. What is the effect on performance, if any, of having a large number of tables in 1 database?
  3. What is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set? For example, I guess I would need to write more queries if I want to query multiple data collections at once when the data is spread across tables rather than kept in one set of tables.


PostgreSQL doesn't have many limits; your hardware is much more limited, and that's where you'll encounter most problems. See http://www.postgresql.org/about/

You can have 2^32 tables in a single database, which is just over 4 billion.


  1. PostgreSQL doesn't impose a direct limit on this; your OS does (it depends on the maximum directory size, since each table is stored as one or more files)
  2. This may depend on your OS as well. Some filesystems get slower with large directories.
  3. PostgreSQL won't be able to optimize a query that spans different tables as a single unit, so using fewer tables (or a single table) should be more efficient; see the sketch after this list
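To make point 3 concrete, here is a hedged sketch of what querying two collections looks like in each layout (the psycopg2 usage, the table names, and the `payload` column are illustrative assumptions, not from the question):

```python
from psycopg2 import sql

def query_collections(conn, prefixes):
    # Separate table sets: build one SELECT per prefix and UNION them;
    # the planner treats each branch as an unrelated scan.
    parts = [
        sql.SQL("SELECT {} AS collection, payload FROM {}").format(
            sql.Literal(p), sql.Identifier(f"{p}_table_one"))
        for p in prefixes
    ]
    with conn.cursor() as cur:
        cur.execute(sql.SQL(" UNION ALL ").join(parts))
        return cur.fetchall()

# With one shared set of tables and a collection column, the same question
# is a single ordinary statement the planner can optimize as a whole:
#   SELECT collection, payload FROM table_one
#   WHERE collection IN ('aaa', 'bbb');
```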


If your data collections are not related to each other, your tables could live in different schemas, and then you would use SET search_path TO schema1, public for example; this way you wouldn't have to dynamically generate table names in your queries. I am planning to try this structure on a large database that stores logs and other tracking information.
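A minimal sketch of that schema-per-collection idea, again assuming psycopg2 (the schema name and column list are illustrative):

```python
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect(dbname="testdb")
with conn.cursor() as cur:
    # One schema per collection; the table names themselves stay fixed.
    cur.execute(sql.SQL("CREATE SCHEMA {}").format(sql.Identifier("aaa")))
    cur.execute(sql.SQL("CREATE TABLE {}.table_one (id serial PRIMARY KEY)")
                .format(sql.Identifier("aaa")))
    # Point unqualified names at the collection, so queries need no prefix.
    cur.execute(sql.SQL("SET search_path TO {}, public")
                .format(sql.Identifier("aaa")))
    cur.execute("SELECT count(*) FROM table_one")  # resolves to aaa.table_one
    # Overwriting a collection becomes a single statement:
    cur.execute(sql.SQL("DROP SCHEMA {} CASCADE").format(sql.Identifier("aaa")))
conn.commit()
```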

You can also use a different tablespace if your OS has a limit on directory size or suffers from large directories.
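For example (a sketch; the tablespace name and path are made up, and the directory must already exist and be owned by the postgres OS user):

```python
import psycopg2

conn = psycopg2.connect(dbname="testdb")
conn.autocommit = True  # CREATE TABLESPACE cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("CREATE TABLESPACE big_ts LOCATION '/mnt/bigdisk/pg_ts'")
    # Move an existing table's storage onto the new tablespace.
    cur.execute("ALTER TABLE aaa_table_one SET TABLESPACE big_ts")
```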
