How to avoid high load on a MySQL DB?

I have a table containing the cities of the world; it holds more than 70,000 cities.

I also have an auto-suggest input on my home page, which is used intensively, and which issues a SQL query (a LIKE search) for every keystroke in the input (after the second letter).

So I am afraid of that heavy load, and I am looking for any solution or technique that can help in such a situation.


Cache the table, preferably in memory. 70,000 cities is not that much data. If each city takes up 50 bytes, that's only 70000 * 50 / (1024 ^ 2) ≈ 3 MB. And after all, a list of cities doesn't change that fast.

If you are using AJAX calls exclusively, you could cache the data for every combination of the first two letters as JSON. Assuming a Latin alphabet, that would be around 26 * 26 = 676 combinations. Save each of those to a text file in JSON format, and have jQuery fetch the text files directly.
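The pre-generation step above can be sketched as follows. This is a minimal illustration, assuming a `cities` list of plain name strings and an output directory layout (`suggest/ka.json`, etc.) chosen here for the example:

```python
# Sketch: pre-generate one JSON file per two-letter prefix so the browser
# can fetch e.g. suggest/ka.json directly, bypassing the database.
# The output directory layout and the `cities` list are assumptions.
import json
import os
from collections import defaultdict

def build_prefix_files(cities, out_dir="suggest"):
    """Group city names by their first two letters and write one JSON file each."""
    os.makedirs(out_dir, exist_ok=True)
    groups = defaultdict(list)
    for name in cities:
        if len(name) >= 2:
            groups[name[:2].lower()].append(name)
    for prefix, names in groups.items():
        path = os.path.join(out_dir, prefix + ".json")
        with open(path, "w", encoding="utf-8") as f:
            json.dump(sorted(names), f, ensure_ascii=False)
    return dict(groups)
```

Regenerating these files is only needed when the city list itself changes, which, as noted above, is rare.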


Create an index on the city names to begin with. This speeds up queries that look like:

SELECT name FROM cities WHERE name LIKE 'ka%'

Also try making your autocomplete form a little 'lazy': the more letters a user enters, the fewer records your database has to deal with.
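A minimal sketch of the indexed-prefix query, using sqlite3 as a stand-in for MySQL (table and column names are illustrative). The key point is that a left-anchored pattern like `'ka%'` can use an index range scan, whereas `'%ka%'` cannot:

```python
# Sketch: an index on `name` lets left-anchored LIKE 'ka%' queries use an
# index range scan instead of a full table scan. sqlite3 stands in for
# MySQL here; the schema is an assumption for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_city_name ON cities(name)")
conn.executemany("INSERT INTO cities (name) VALUES (?)",
                 [("Karachi",), ("Kathmandu",), ("Cairo",), ("Kansas City",)])

def suggest(prefix, limit=10):
    # Only 'prefix%' patterns are index-friendly; '%prefix%' forces a scan.
    rows = conn.execute(
        "SELECT name FROM cities WHERE name LIKE ? ORDER BY name LIMIT ?",
        (prefix + "%", limit))
    return [r[0] for r in rows]
```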


What resources exist for Database performance-tuning?


You should cache as much data as you can on the web server. Data that does not change often, like lists of countries and cities, is a good candidate for this. Realistically, how often do you add a country? Even if the list changes, a simple refresh of the cache will handle it.

You should make sure that your queries are tuned properly to make the best use of indexes and joins.

You may have load on your DB from other queries as well, so you may want to look into general techniques for improving the performance of MySQL databases.


Just get your table to fit in memory, which should be trivial for 70k rows.

Then you can do a scan very easily. Maybe don't even use a SQL database for this (since the data doesn't change very often); just dump the cities into a text file and scan that. That would definitely be better if you have many web servers but only one DB server, as each web server could keep its own copy of the file.
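The flat-file approach above can be sketched in a few lines. The file name and the one-name-per-line format are assumptions for the example:

```python
# Sketch: keep the city list in a plain text file (one name per line) on
# each web server and scan it in memory; no database round-trip needed.
def load_cities(path="cities.txt"):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def suggest(cities, prefix, limit=10):
    p = prefix.lower()
    # Linear scan; fast enough for ~70k short strings held in memory.
    return [c for c in cities if c.lower().startswith(p)][:limit]
```

Load the file once at web-server startup and reuse the list across requests.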

How many queries per second are you seeing peak? I can't imagine there being that many people typing city names in, even if it is a very busy site.

Also, you could cache the individual responses (e.g. in memcached) if you get a good hit rate (e.g. because people tend to type the same things in).


Actually, you could also precalculate the responses for all one- to three-letter combinations; that's only 26 * 26 * 26 = 17,576 entries for the three-letter case. Since an input of four or more letters must logically match a subset of one of those, you could then scan the appropriate one of the ~17k entries.
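A sketch of that precomputation, assuming lowercase ASCII prefixes (26 + 26² + 26³ = 18,278 possible keys, fewer in practice since empty prefixes are dropped). Longer queries are answered by filtering the list stored under their first three letters:

```python
# Sketch: precompute suggestion lists for every 1-3 letter prefix; a query
# of four or more letters filters the list under its first three letters.
from itertools import product
from string import ascii_lowercase

def precompute(cities, max_len=3):
    table = {}
    for n in range(1, max_len + 1):
        for combo in product(ascii_lowercase, repeat=n):
            prefix = "".join(combo)
            matches = [c for c in cities if c.lower().startswith(prefix)]
            if matches:  # skip empty prefixes to keep the table small
                table[prefix] = matches
    return table

def suggest(table, query, max_len=3):
    base = table.get(query[:max_len].lower(), [])
    return [c for c in base if c.lower().startswith(query.lower())]
```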


If you have an index on the city name, it should be handled by the database efficiently. (This statement is wrong; see comments below.)

To lower the demands on your server resources, you can offer autocompletion only after the first n characters. Also allow for some timeout, i.e. don't fire a request while the user is still typing; once the user has stopped typing for a while, request the autocompletion.

