Hadoop and MySQL Integration
We would like to implement Hadoop on our system to improve its performance.
The process would work like this: Hadoop gathers data from the MySQL database and processes it; the output is then exported back to the MySQL database.
Is this a good implementation? Will it improve our system's overall performance? What are the requirements, and has this been done before? A good tutorial would really help.
Thanks
Sqoop is a tool designed to import data from relational databases into Hadoop:
https://github.com/cloudera/sqoop/wiki/
There is also a video about it: http://www.cloudera.com/blog/2009/12/hadoop-world-sqoop-database-import-for-hadoop/
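As a rough illustration, here is a minimal sketch of driving Sqoop 1.x from Java through its runTool entry point (the same arguments work on the sqoop command line). The connection URL, credentials, table names, and HDFS paths are placeholder assumptions:

```java
// Minimal sketch: round-tripping a MySQL table through HDFS with Sqoop 1.x.
// All connection details, table names, and paths are placeholders.
import org.apache.sqoop.Sqoop;

public class SqoopRoundTrip {
  public static void main(String[] args) {
    // Import the MySQL table into HDFS.
    int rc = Sqoop.runTool(new String[] {
        "import",
        "--connect", "jdbc:mysql://dbhost/mydb",
        "--username", "user", "--password", "secret",
        "--table", "orders",
        "--target-dir", "/data/orders"
    });
    if (rc != 0) System.exit(rc);

    // ... run your MapReduce processing over /data/orders,
    // writing results to /data/results ...

    // Export the processed output back into a MySQL table.
    rc = Sqoop.runTool(new String[] {
        "export",
        "--connect", "jdbc:mysql://dbhost/mydb",
        "--username", "user", "--password", "secret",
        "--table", "results",
        "--export-dir", "/data/results"
    });
    System.exit(rc);
  }
}
```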
Hadoop is mostly used for batch-based jobs on large volumes of semi-structured data. Batch in the sense that even the shortest jobs take on the order of minutes. What kind of performance problem are you facing? Is it in data transformations or in reporting? Depending on that, this architecture may help or may make things worse.
As mentioned by Joe, Sqoop is a great tool in the Hadoop ecosystem for importing and exporting data to and from SQL databases such as MySQL.
If you need more complex integration with MySQL, including e.g. filtering or transformation, then you should use an integration framework or integration suite for this problem. Take a look at my presentation "Big Data beyond Hadoop - How to integrate ALL your data" for more information about how to use open source integration frameworks and integration suites with Hadoop.
Although this is not a typical Hadoop use case, it might make sense in the following scenario:
a) You have a good way to partition your data into inputs (such as an existing partitioning).
b) The processing of each partition is relatively heavy; I would put the threshold at roughly 10 seconds of CPU time per partition.
If both conditions are met, you will be able to apply as much CPU power as you need to your data processing.
If you are doing a simple scan or aggregation, I think you will not gain anything. On the other hand, if you are going to run CPU-intensive algorithms on each partition, the gain can indeed be significant.
I would also mention a separate case: processing that requires massive data sorting. I do not think MySQL will be good at sorting billions of records; Hadoop will do it.
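Hadoop's shuffle phase sorts all map output by key, so a sort is almost free to express. As a minimal sketch (assuming the Hadoop 2.x MapReduce API; class names and the CSV layout are made up for the example), this job sorts CSV records by their first column; the default identity reducer then writes keys in sorted order, and for a single global order across several reducers you would add a TotalOrderPartitioner:

```java
// Minimal sketch: sort CSV records by their first column by letting
// Hadoop's shuffle do the ordering. Paths come from the command line.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SortByKey {
  // Emit (firstColumn, wholeLine); the framework sorts the keys.
  public static class KeyMapper extends Mapper<Object, Text, Text, Text> {
    @Override
    protected void map(Object offset, Text line, Context ctx)
        throws java.io.IOException, InterruptedException {
      String[] cols = line.toString().split(",", 2);
      ctx.write(new Text(cols[0]), line);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "sort-by-key");
    job.setJarByClass(SortByKey.class);
    job.setMapperClass(KeyMapper.class);
    // No reducer class set: the identity reducer emits keys in sorted order.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```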
I agree with Sai. I use Hadoop with MySQL only when needed. I export the table to CSV and upload it to HDFS to process the data more quickly. If you want to persist your processed data back in MySQL, you will have to write a single-reducer job that does some kind of batch inserts to improve insertion performance, as sketched below.
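Here is a minimal sketch of such a reducer (new MapReduce API) that batches inserts over plain JDBC; the JDBC URL, credentials, and results table schema are placeholder assumptions, and you would configure the job with job.setNumReduceTasks(1). Hadoop also ships DBOutputFormat for the same purpose:

```java
// Minimal sketch: a reducer that batch-inserts processed rows into MySQL.
// JDBC URL, credentials, and table schema are placeholders.
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class BatchInsertReducer
    extends Reducer<Text, Text, NullWritable, NullWritable> {
  private static final int BATCH_SIZE = 1000;
  private Connection conn;
  private PreparedStatement stmt;
  private int pending = 0;

  @Override
  protected void setup(Context ctx) throws IOException {
    try {
      conn = DriverManager.getConnection(
          "jdbc:mysql://dbhost/mydb", "user", "secret"); // placeholders
      conn.setAutoCommit(false);
      stmt = conn.prepareStatement("INSERT INTO results (k, v) VALUES (?, ?)");
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context ctx)
      throws IOException {
    try {
      for (Text v : values) {
        stmt.setString(1, key.toString());
        stmt.setString(2, v.toString());
        stmt.addBatch();
        if (++pending % BATCH_SIZE == 0) {
          stmt.executeBatch(); // flush one full batch of inserts
          conn.commit();
        }
      }
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }

  @Override
  protected void cleanup(Context ctx) throws IOException {
    try {
      stmt.executeBatch(); // flush the remainder
      conn.commit();
      stmt.close();
      conn.close();
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }
}
```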
BUT that really depends on what kind of things you want to do.