
Database table design optimised for high-volume data

If I were to organise data on tourism and hotels for the continents of Australia and South America, could you let me know the best way to do it? For example, if I organise the tables thus:

  • continents
  • countries
  • states
  • zipcodes
  • cities
  • destinations
  • hotels
  • lnkhotelszipcodes (in case a hotel exists in multiple places)
  • lnkhotelscities

It will be search-oriented: by country, state, zipcode and city, and then by the amenities or features of hotels and tourist destinations.
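A minimal sketch of what that layout might look like, using Python's built-in sqlite3 only so the DDL is runnable; the column names are my own assumptions about the design, while the link tables come straight from the list above:

    import sqlite3

    # In-memory database just to show the structure; a real deployment would
    # use a full RDBMS (MySQL, PostgreSQL, ...).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE continents   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE countries    (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                               continent_id INTEGER REFERENCES continents(id));
    CREATE TABLE states       (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                               country_id INTEGER REFERENCES countries(id));
    CREATE TABLE cities       (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                               state_id INTEGER REFERENCES states(id));
    CREATE TABLE zipcodes     (id INTEGER PRIMARY KEY, code TEXT NOT NULL,
                               city_id INTEGER REFERENCES cities(id));
    CREATE TABLE destinations (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                               city_id INTEGER REFERENCES cities(id));
    CREATE TABLE hotels       (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

    -- Link tables for the many-to-many cases mentioned in the question.
    CREATE TABLE lnkhotelscities   (hotel_id INTEGER REFERENCES hotels(id),
                                    city_id  INTEGER REFERENCES cities(id),
                                    PRIMARY KEY (hotel_id, city_id));
    CREATE TABLE lnkhotelszipcodes (hotel_id   INTEGER REFERENCES hotels(id),
                                    zipcode_id INTEGER REFERENCES zipcodes(id),
                                    PRIMARY KEY (hotel_id, zipcode_id));
    """)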


In a nutshell - create one table for one entity.

I think you should create an ERD (entity-relationship diagram, http://en.wikipedia.org/wiki/Entity-relationship_model) and map it to tables as follows: one entity, one table. As a result you will get data in third normal form. Proceed this way until you actually hit performance problems; if they turn out to be serious, start denormalizing. I do not think that building something unnormalized is justified in this case.
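To make the normalize-first, denormalize-later advice concrete, here is a hypothetical sketch reusing the table names from the question: the normalized lookup walks the full location hierarchy with joins, and the denormalized shortcut copies a country_id onto hotels only if those joins become a measured bottleneck.

    # Normalized lookup (3NF): joins the full location hierarchy each time.
    normalized_query = """
    SELECT h.name
    FROM hotels h
    JOIN lnkhotelscities lhc ON lhc.hotel_id = h.id
    JOIN cities c    ON c.id = lhc.city_id
    JOIN states s    ON s.id = c.state_id
    JOIN countries k ON k.id = s.country_id
    WHERE k.name = ?
    """

    # Denormalized shortcut: a redundant country_id copied onto hotels removes
    # three joins, at the cost of keeping the copy in sync on every update.
    denormalize_ddl    = "ALTER TABLE hotels ADD COLUMN country_id INTEGER"
    denormalized_query = "SELECT name FROM hotels WHERE country_id = ?"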


I would suggest you model it using a star schema, with a region dimension. That would help in slicing and dicing the data.

A star schema is used in data-warehousing applications; it lets users analyse data at various levels of granularity. In your case, you could get counts of hotels at any level of the location hierarchy. There is plenty of information on star schemas online: search for "star schema" and read up on data warehousing.

This could be a good starting point: http://en.wikipedia.org/wiki/Star_schema
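Purely as an illustration of what a region dimension could look like (all names here are hypothetical, not part of the answer): the location hierarchy is flattened into one dimension table, and a fact table references it, so hotel counts can be rolled up at any level.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Dimension: one row per location, with the whole hierarchy flattened in,
    -- so queries can group by any level without extra joins.
    CREATE TABLE dim_region (
        region_id INTEGER PRIMARY KEY,
        continent TEXT, country TEXT, state TEXT, city TEXT, zipcode TEXT
    );
    CREATE TABLE dim_hotel (
        hotel_id INTEGER PRIMARY KEY,
        name TEXT, star_rating INTEGER
    );
    -- Fact: one row per hotel/region combination (or per booking, per night...).
    CREATE TABLE fact_hotel (
        hotel_id   INTEGER REFERENCES dim_hotel(hotel_id),
        region_id  INTEGER REFERENCES dim_region(region_id),
        room_count INTEGER
    );
    """)

    # "Slicing and dicing": the same fact table answers questions at any level.
    hotels_per_country = """
    SELECT r.country, COUNT(DISTINCT f.hotel_id)
    FROM fact_hotel f JOIN dim_region r ON r.region_id = f.region_id
    GROUP BY r.country
    """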


How would a hotel exist in multiple places? Also, is this really about one large table, or do you mean a large database?

