
Tables with 30 million entries are slow. Optimize MySQL or switch to MongoDB?

I have a simple MySQL database running on one server with two tables: products and reviews. The products table has about 10 million entries and the reviews table has about 30 million.

The whole DB is about 30 GB. I feel like it's getting slow, and I'm wondering what I should do about it. I created indexes, but they didn't help. For example, the products table has a category field, and even a simple select * from products where category=2 is slow.

Will switching to MongoDB help in this situation, or can I solve it just by optimizing MySQL somehow? Should I shard, or are tables of this size small enough to optimize some other way?

Tables and my.cnf

CREATE TABLE IF NOT EXISTS `products` (
  `id` int(11) NOT NULL auto_increment,
  `product_title` varchar(1000) NOT NULL,
  `product_id` varchar(100) NOT NULL,
  `title` varchar(1000) NOT NULL,
  `image` varchar(1000) NOT NULL,
  `url` varchar(1000) NOT NULL,
  `price` varchar(100) NOT NULL,
  `reviews` int(11) NOT NULL,
  `stars` float NOT NULL,
  `BrowseNodeID` int(11) NOT NULL,
  `status` varchar(100) NOT NULL,
  `started_at` int(15) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `id_index` (`BrowseNodeID`),
  KEY `status_index` (`status`),
  KEY `started_index` (`started_at`),
  KEY `id_ind` (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=13743335 ;


CREATE TABLE IF NOT EXISTS `reviews` (
  `id` int(11) NOT NULL auto_increment,
  `product_id` varchar(100) NOT NULL,
  `product_title` varchar(1000) NOT NULL,
  `review_title` varchar(1000) NOT NULL,
  `content` varchar(5000) NOT NULL,
  `author` varchar(255) NOT NULL,
  `author_profile` varchar(1000) NOT NULL,
  `stars` float NOT NULL,
  `owner` varchar(100) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `product_id` (`product_id`),
  KEY `id_index` (`product_id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=48129737 ;

Some info from my.cnf

set-variable    = query_cache_size=1512M
set-variable    = thread_cache_size=8
thread_concurrency = 8
skip-innodb
low-priority-updates
delay-key-write=ALL

key_buffer_size = 100M
max_allowed_packet = 1M
#table_open_cache = 4048
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 50M
set-variable    = table_cache=256
set-variable    = query_cache_limit=1024M
set-variable    = query_cache_size=1024M


Based on your my.cnf, it looks as though your key_buffer_size is way too small, so you're going to disk for every read. Ideally, that value should be set larger than the total size of your MyISAM indexes.
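To see how large your MyISAM indexes actually are (and therefore roughly how big key_buffer_size should be), you can ask information_schema. This is a generic sketch against a standard MySQL install, not tuned to your server:

```sql
-- Total MyISAM index size per schema, in MB.
-- Compare the result against your key_buffer_size (currently 100M).
SELECT table_schema,
       SUM(index_length) / 1024 / 1024 AS index_mb
FROM information_schema.tables
WHERE engine = 'MyISAM'
GROUP BY table_schema;
```

If index_mb comes out well above 100, the key buffer can't hold your indexes and reads will keep hitting disk.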

Before you go changing DB technologies, you may also want to consider switching your storage engine to InnoDB; your my.cnf currently disables it (skip-innodb). I've gotten pretty stellar performance out of a 300M-row table with smart indexes and enough memory. InnoDB will also give you some leeway with longer-running reads, since it won't lock your entire table.
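The conversion itself is a single ALTER TABLE per table, though on tables this size it rebuilds the whole table and can take quite a while. A minimal sketch (the 8G buffer pool value is only an example; size it to your RAM after removing skip-innodb):

```sql
-- Convert each table to InnoDB (full table rebuild; slow on 30M rows,
-- so run it during a maintenance window).
ALTER TABLE products ENGINE=InnoDB;
ALTER TABLE reviews ENGINE=InnoDB;

-- Then in my.cnf, give InnoDB the memory the MyISAM key buffer was using:
-- innodb_buffer_pool_size = 8G   (example value, not a recommendation)
```

After the conversion, InnoDB caches both data and indexes in the buffer pool, so key_buffer_size matters much less.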

