
Using Hadoop to update MySQL

I'm using my reducers to write data into MySQL. My concern is that in some cases, multiple reducers are launched for the same key simultaneously. In that case, there is a chance that the DB could be updated twice with the same data. Is there a way to protect against that?

Would it make sense to turn off autocommit mode in the SQL connection in this case?
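For context, here is a minimal sketch of what turning off autocommit in a reducer's JDBC connection would look like, with one explicit commit at the end; the connection URL, credentials, and the table and column names are placeholders, not from the original post:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class AutocommitSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://dbhost/mydb", "user", "password");
            conn.setAutoCommit(false); // nothing is written until commit()
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE counts SET total = total + ? WHERE word = ?")) {
                ps.setLong(1, 42L);
                ps.setString(2, "example");
                ps.executeUpdate();
                conn.commit();   // single explicit commit for the task
            } catch (Exception e) {
                conn.rollback(); // nothing is applied if the task dies mid-way
                throw e;
            } finally {
                conn.close();
            }
        }
    }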


You can change this setting:

mapred.reduce.tasks.speculative.execution

Setting it to false disables speculative execution of reduce tasks, if that is what is launching duplicate reducers for the same key.
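A minimal sketch of setting this from the job driver, assuming the classic org.apache.hadoop.mapred API; the driver class name is a placeholder:

    import org.apache.hadoop.mapred.JobConf;

    public class MySqlExportDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MySqlExportDriver.class);
            // Do not launch backup (speculative) attempts of reduce tasks,
            // so the same key range is not written to MySQL twice in parallel.
            conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
            // ... set mapper/reducer classes, input/output paths,
            // then submit with JobClient.runJob(conf) as usual ...
        }
    }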

Another option I would suggest is to use Sqoop to write to MySQL: http://archive.cloudera.com/cdh/3/sqoop/SqoopUserGuide.html#_literal_sqoop_export_literal


I think this situation has nothing to do with autocommit. If the duplicates are not numerous and do not add much overhead, you can ignore them, because they will not break consistency. All your reducers do is execute SQL queries, so how would you prevent them from executing queries for the same keys? I think you should solve this issue in your map/reduce logic, because it is not something the DBMS can handle: all it does is execute the queries it is given.


Found the solution: it was turning off speculative execution.

