100+ tables JOIN

I was wondering if anyone ever had a chance to measure how 100 joined tables would perform. Each table would have an ID column with a primary index, and all tables would be related 1:1.

It is a common problem in many data entry applications where we need to collect 1000+ data points. One solution would be to have one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary.

So perhaps the more realistic question is: how would 30 tables (30 columns each) behave in a multi-table join?

500K-1M rows should be the expected size of the tables.
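
To make the shape of the query concrete, here is a sketch of the kind of statement I mean (table and column names are made up for illustration):

    SELECT t1.*, t2.*, t3.*  -- ... and so on, up to t30
    FROM Part01 AS t1
    JOIN Part02 AS t2 ON t2.Id = t1.Id
    JOIN Part03 AS t3 ON t3.Id = t1.Id
    -- ... 27 more 1:1 joins on the shared Id ...
    WHERE t1.Id = 42;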

Cheers


As a rule of thumb, any more than 25 joins might be a performance problem; I try to keep joins below 10-15. It also depends on database activity, the number of concurrent users, and the ratio of reads to writes.

Suggest you look at indexed views.

With any well-tuned database, 'good' indexes for the query workload are the key.
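
For instance, a minimal sketch of an indexed view in SQL Server syntax (the table, view, and column names here are hypothetical): the unique clustered index materializes the join result so repeated queries don't recompute it.

    CREATE VIEW dbo.CombinedPoints
    WITH SCHEMABINDING  -- required for indexed views
    AS
    SELECT a.Id, a.Width, a.Height, b.Visits
    FROM dbo.PartA AS a
    JOIN dbo.PartB AS b ON b.Id = a.Id;
    GO

    -- The unique clustered index stores the view's rows on disk.
    CREATE UNIQUE CLUSTERED INDEX IX_CombinedPoints
    ON dbo.CombinedPoints (Id);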


They'd most likely perform terribly, unless you had a very small number of rows per table.

Go for a wider table, but normalize it properly. My guess is that once the data is properly normalized, you will end up with a somewhat saner design.


What you describe is similar to the implementation of a column-oriented database (see Wikipedia). The data is stored in "column major" format, which slows down adding each row but is much faster for queries where a WHERE clause restricts the returned rowset.

Why is it that you would rather split the rows? Is it that you measure the data elements for each row at different times? Or is it that the query result of a row would be very large?

Since first posting this, you answered below that your reason for wanting to split the table is that you usually work with only a subset of the data.

In that case, splitting the table can reduce the runtime your queries consume by some amount. This is most likely to matter when your database engine handles large rows slowly, which may be why you want to work with less data in the first place.

If performance is not the concern, then rather than splitting the table and using SQL JOINs, it might serve you to explicitly list the columns you wish to retrieve in each query. For example, if you only need width, height, and length for a row, you could use SELECT width, height, length FROM datatable; rather than SELECT * FROM datatable; and get the same benefit of returning less data. These statements would probably also be shorter than the join statements we were considering.
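
For contrast, if width, height, and length lived in separate 1:1 tables, the same retrieval would need joins (a sketch; the split-table names are assumed):

    SELECT w.width, h.height, l.length
    FROM widths  AS w
    JOIN heights AS h ON h.id = w.id
    JOIN lengths AS l ON l.id = w.id;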


There's no way to better organise the tables? For example a "DataPointTypes" and "DataPointValues" table?

For example (and I don't know your particular circumstances), if all of your tables are like "WebsiteDataPoints (WebsitePage, Day, Visits)", "StoreDataPoints (Branch, Week, Sales)", etc., you could instead have:

DataPointSources(Name) 
 (with data: Website,Store)

DataPointTypes(SourceId, ColumnName) 
 (with data: (Website, WebsitePage), (Website, Day), (Store, Branch), (Store, Sales) etc.)

DataPointEntry(Id, Timestamp)

DataPointValues (EntryId, TypeId, Value (as varchar, probably)) 
 (with data: (1, Website-WebsitePage, 'pages.php'), (2, Store-Branch, 'MainStore'), (1, Website-Day, '12/03/1980'), (2, Store-Sales, '35') etc.)

In this way each table becomes a source, each column becomes a type, each row becomes an entry, and each cell becomes a value.
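
A plain-SQL sketch of that layout (names follow the answer; data types and keys are assumptions):

    CREATE TABLE DataPointSources (
        Id   INT PRIMARY KEY,
        Name VARCHAR(50) NOT NULL            -- e.g. 'Website', 'Store'
    );

    CREATE TABLE DataPointTypes (
        Id         INT PRIMARY KEY,
        SourceId   INT NOT NULL REFERENCES DataPointSources(Id),
        ColumnName VARCHAR(50) NOT NULL      -- e.g. 'WebsitePage', 'Branch'
    );

    CREATE TABLE DataPointEntry (
        Id        INT PRIMARY KEY,
        Timestamp DATETIME NOT NULL
    );

    CREATE TABLE DataPointValues (
        EntryId INT NOT NULL REFERENCES DataPointEntry(Id),
        TypeId  INT NOT NULL REFERENCES DataPointTypes(Id),
        Value   VARCHAR(255),                -- as varchar, probably
        PRIMARY KEY (EntryId, TypeId)
    );

    -- Reading back one entry's values is a single join instead of thirty:
    SELECT t.ColumnName, v.Value
    FROM DataPointValues AS v
    JOIN DataPointTypes  AS t ON t.Id = v.TypeId
    WHERE v.EntryId = 1;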
