Switch from MySQL to MongoDB with 200 million rows

We are trying to move from MySQL to MongoDB. The MySQL structure is: id_src int, id_dest int, with a unique key on (id_src, id_dest).

There are about 200 million rows in MySQL.

Example data ({id_src, id_dest}): {1,2} {1,3} {1,10} {2,3} {2,10} {4,3}

We need to retrieve data of the form {id_dest, count}: {3,3} {10,2} {2,1}

I started by reproducing the MySQL structure in MongoDB. Insert performance was huge (very good): about 1 hour to insert 200 million rows.

But I needed to use map/reduce to get the group by, and the map/reduce took about 1 hour.
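For reference, a minimal mongo-shell version of that group-by map/reduce might look like the sketch below (the collection name ids and output name dest_counts are assumptions, not from the original):

var map = function () { emit(this.id_dest, 1); };                  // one vote per (id_src, id_dest) row
var reduce = function (key, values) { return Array.sum(values); }; // sum the votes per id_dest
db.ids.mapReduce(map, reduce, { out: "dest_counts" });             // results land in {_id: id_dest, value: count}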

So I tried to create another MongoDB structure: {id_dest, [id_src1, id_src2, ...]}

Each document can hold hundreds of thousands of id_src values.

Here is my insert.php code:

$res = mysql_unbuffered_query("select * from ids limit 10000100");
while ($tab = mysql_fetch_array($res)) {
    // Upsert one document per id_dest, pushing each id_src onto its 'src' array
    $collection->update(array('_id' => (int)$tab['id_dest']),
                        array('$push' => array('src' => (int)$tab['id_src'])),
                        array("upsert" => true));
}

But in that case, performance is very bad: only a few updates per second.

Am I doing something wrong?


First things first: map/reduce is not designed for real-time analysis. Additionally, MongoDB is currently limited to one core for M/R, which slows things down even more.

So if you're going to use M/R to get data, it will not be "real-time"; it will be updated every X minutes (or hours).

There are two efficient approaches here:

  1. Incremental M/R
  2. Real-time counters

Option #1: Incremental M/R

For this option you run the M/R once over all of the data. Then, going forward, you only run M/R over the modified data. If you have 200M documents now, you may have 210M documents next month (which means that a full M/R gets even slower). But if you only have to run it over new/changed documents, it should take much less than 1 hour.

Take a look at the reduce output option for map/reduce in the MongoDB docs.

Again the premise is that you only M/R the relevant data and the system "re-reduces" against the existing data. In this way you get an "incremental" M/R.
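As a minimal sketch of how that can look in the mongo shell (the ts timestamp field and the lastRunTime variable are assumptions used to select the modified data; they are not from the original):

var map = function () { emit(this.dest, 1); };
var reduce = function (key, values) { return Array.sum(values); };
// First pass: run over everything and write the totals to "summary"
db.main.mapReduce(map, reduce, { out: { reduce: "summary" } });
// Later passes: only touch new documents and re-reduce into the same output
db.main.mapReduce(map, reduce, {
    query: { ts: { $gt: lastRunTime } },  // only documents added since the last run
    out: { reduce: "summary" }
});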

Option #2: Real-Time counters

In this method, you have two collections: one for your data, and a second for the "summary" results. When you insert into the data collection, you also increment the summary.

Assume you have this data:

Main Collection
{src: 1, dest: 2}
{src: 1, dest: 3}
{src: 1, dest: 10}
{src: 2, dest: 3}
{src: 2, dest: 10}
{src: 4, dest: 3}

Summary Collection
{dest: 3, count: 3}
{dest: 10, count: 2}
{dest: 2, count: 1}

You receive a new piece of data {src: 5, dest: 2}. You would do two updates:

db.main.insert({src: 5, dest: 2});
db.summary.update({dest: 2}, { $inc : { count: 1 } }, true); // upsert true

Here's your new data:

Main Collection
{src: 1, dest: 2}
...
{src: 4, dest: 3}
{src: 5, dest: 2}

Summary Collection
{dest: 3, count: 3}
{dest: 10, count: 2}
{dest: 2, count: 2}

You'll notice that we've updated our summary: {dest: 2, count: 2}.

Obviously, there are trade-offs here. You need more updates/inserts (2x), but you get real-time counters. Now, there are no transactions in MongoDB, so you'll have to decide on a strategy for ensuring that both updates happen. There are lots of ways to do this which I cannot go into here (see message queues for one method).
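As one hypothetical strategy (a sketch, not from the original answer): journal the intent first, reuse its _id so a replayed insert cannot duplicate, and let a cleanup job replay "pending" entries that never finished.

var opId = ObjectId();
db.journal.insert({ _id: opId, src: 5, dest: 2, state: "pending" });
db.main.insert({ _id: opId, src: 5, dest: 2 });  // reusing opId keeps a replayed insert from duplicating
db.summary.update({ dest: 2 }, { $inc: { count: 1 } }, true);
db.journal.update({ _id: opId }, { $set: { state: "done" } });
// Note: replaying the $inc is still not idempotent on its own; a real
// implementation (e.g. a message queue) needs more care than this sketch.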
