How can I optimize remote d/b access?
I doubt that the problem is with "remote", and think it more likely to be with "d/b access" - but how can I know?
Can/should I optimize my actual d/b structure? Maybe by adding foreign keys, etc.
Or use a round robin d/b to limit the number of records? Or move some rows "offline" on a regular basis?
Maybe I can optimize my SQL (use of inner/outer joins, etc.)?
Fwiw, the typical operation is a d/b write, which users complain is "too slow"; there are very few reads (just one of those cases where you store data "just in case it is needed").
Any advice, websites, books? Are there any Valgrind-type tools to measure/profile what is actually happening?
Off the top of my head . . .
- Measure performance. How long does it actually take, and what does your application do during that time? Use a stopwatch if you don't have any other way (a minimal timing sketch follows this list).
- Compare local performance. How long does the same task take when you access the database locally?
- Which remote access technology are you using? HTTP, VPN, VNC, SSH?
- Which DBMS are you using?
- What tables are involved? Post their structure and number of rows.
- How does your application write to the database? Through direct table access, or stored procedures?
- Insert or update?
- Adding foreign keys is not an optimization. Foreign keys are fundamental to data integrity. Add them before you get fired. (A small illustration follows this list.)
- What server-side software are you using? Are you using PHP, Ruby on Rails, Django, or ASP? What facilities do they offer to log their performance? (A Django example follows this list.)
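Re "use a stopwatch": here is a minimal timing sketch, using Python's sqlite3 purely as a stand-in (the `log` table and the row count are made up). Swap the connect() call for your real driver and connection string, then run the same script once against a local copy and once against the remote server to separate "remote" from "d/b access":

```python
import sqlite3
import time

# Stand-in database: in-memory SQLite. Replace this connect() call with your
# own DBMS driver's to measure your actual setup, locally and then remotely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [("row %d" % i,) for i in range(10000)]

# One statement per row: over a network, every insert pays a full round trip.
# This is the usual "writes are too slow" pattern.
start = time.perf_counter()
for row in rows:
    conn.execute("INSERT INTO log (payload) VALUES (?)", row)
conn.commit()
print("row-by-row: %.3f s" % (time.perf_counter() - start))

# Same rows again, batched: the driver ships them in one go, one commit.
start = time.perf_counter()
conn.executemany("INSERT INTO log (payload) VALUES (?)", rows)
conn.commit()
print("batched:    %.3f s" % (time.perf_counter() - start))
```

If the row-by-row figure is dramatically worse remotely than locally, the cost is per-statement round trips, and batching or fewer commits will buy you more than any schema change.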
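Re foreign keys: to illustrate that they are about integrity rather than speed, here is a tiny sketch with hypothetical customers/orders tables (sqlite3 again, where enforcement must be switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves enforcement off by default

conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id)
                )""")

conn.execute("INSERT INTO customers (id, name) VALUES (1, 'alice')")
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")  # valid parent, accepted

try:
    # No customer 99 exists; the constraint rejects the orphan row.
    conn.execute("INSERT INTO orders (customer_id) VALUES (99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The point is that the database, not the application code, refuses orphan rows; on most server DBMSs the constraint is enforced as soon as it is declared.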
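Re logging facilities: as one concrete example, if the stack happens to be Django, each SQL statement and its duration can be echoed through the `django.db.backends` logger (only while settings.DEBUG is True), which is a quick way to see which writes dominate:

```python
# settings.py fragment (Django): log every SQL statement with its duration.
# Queries are only recorded while DEBUG = True, so this is a diagnostic
# setting, not something to leave on in production.
DEBUG = True

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Django's database layer logs '(duration) sql' at DEBUG level here.
        "django.db.backends": {"handlers": ["console"], "level": "DEBUG"},
    },
}
```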