Is Hadoop going to give me more benefits in my case? [closed]
I'm using Clojure to pull ten XML files hourly; each file is about 10 MB. This script is running on a server machine.
The XML files are parsed and stored into an RDBMS right now (all done using native Clojure code). Considering my case, am I going to gain more if I use Hadoop MapReduce to parse the XML files, or would it be overkill?
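For reference, a minimal sketch of the kind of hourly pull/parse/store loop described above, assuming clojure.xml for parsing and clojure.java.jdbc for the insert (the URLs, table, column names, and connection details are hypothetical; the question only says the work is done in native Clojure):

```clojure
(require '[clojure.xml :as xml]
         '[clojure.java.jdbc :as jdbc])   ; external dep: org.clojure/java.jdbc

(def db {:dbtype "postgresql" :dbname "feeds"
         :user "feeds" :password "secret"})        ; hypothetical connection details

(def feed-urls ["http://example.com/feed-1.xml"    ; hypothetical endpoints
                "http://example.com/feed-2.xml"])

(defn store-item! [item]
  ;; Flatten one parsed element into a row; the table and column names are assumptions.
  (jdbc/insert! db :items {:tag     (name (:tag item))
                           :content (pr-str (:content item))}))

(defn process-feed [url]
  ;; clojure.xml/parse accepts a string naming a URI and returns nested
  ;; {:tag :attrs :content} maps.
  (doseq [item (:content (xml/parse url))]
    (store-item! item)))

(defn run-hourly-batch []
  (doseq [url feed-urls]
    (process-feed url)))
```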
Using Hadoop would definitely be overkill in this case. If you were to use Hadoop to parse the 10 files in parallel:
- It would spawn 10 JVMs, one for each Map task
- It could spawn 1 more JVM for the Reduce task (of course, you could have a map-only Hadoop job where you won't need a Reduce phase)
- There would be a shuffle stage between the Map and Reduce phases, where all Map output is sent across the network to the Reduce node
If your files are each at most 10 MB, then I don't see much advantage, and you will in fact incur significant overhead from the JVM startups and excessive I/O.
I'd say you should consider Hadoop once you cross 100-150 MB per file.
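If the goal is simply to keep all cores busy while handling the 10 files, plain Clojure already covers that in-process. A small sketch using pmap (my suggestion for comparison, not something from the original setup), where parse-and-store! stands in for whatever per-file parse-and-insert work the existing code does:

```clojure
(require '[clojure.xml :as xml])

(defn parse-and-store! [file]
  ;; Stand-in for the existing parse + RDBMS insert logic.
  (xml/parse file))

(defn process-files [files]
  ;; pmap runs the per-file work on a thread pool; doall forces the lazy
  ;; result so every file is finished before the function returns.
  (doall (pmap parse-and-store! files)))
```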
I have two Clojure examples that you could use for comparison:
- 1 application parsing thousands of XML files, each around 1 MB or less; processing takes around 50 ms each under normal load.
- 1 other application processing relatively big log files, each 50-100 MB; processing takes around 1-2 seconds each.
Of course, this depends on the server's processing power, but everything is done in Clojure, without any hint of a bottleneck.
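For what it's worth, a tiny sketch of how per-file timings like those above can be collected; process-file is a placeholder for whatever parsing the application actually does:

```clojure
(defn timed-process [process-file file]
  ;; Wraps any per-file processing fn and prints the elapsed wall-clock time.
  (let [start      (System/nanoTime)
        result     (process-file file)
        elapsed-ms (/ (- (System/nanoTime) start) 1e6)]
    (println (str file " processed in " (format "%.1f" elapsed-ms) " ms"))
    result))
```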