
What types of tasks/applications can Apache Hadoop (MapReduce) be used for?

I don't understand what types of apps can be used with Hadoop. Does each task have to be tailored for Hadoop/MapReduce? For example, can you just hand it any long-running Java process, or do you specifically have to tailor your app/task for Hadoop? I guess a good example would be using Lucene and Hadoop for indexing.


MapReduce is a processing model; it dictates exactly the shape your processing task must fit into.

  1. Your processing must be batch-oriented.
  2. You must be able to transform your work into a (set of) map and reduce steps.
  3. To gain any advantage from the scalability of MapReduce, you must be able to split the work into enough independent (!!) pieces that can be processed separately.

Hadoop does (among other things) MapReduce, with the added advantage that you can reliably run a job on 1000 systems in parallel (if you have enough independent pieces).

Given those constraints, some things cannot be done and a lot of things can. Analyzing log files (i.e. a large set of independent lines) and even web analytics (everything a single visitor/session did can be processed separately) are among the most common applications.
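To make the log-file example concrete, here is a minimal sketch using the standard Hadoop MapReduce Java API that counts occurrences of each HTTP status code in an access log. The class names and the assumed log layout (status code as the 9th whitespace-separated field, as in common Apache access logs) are illustrative assumptions, not a definitive implementation:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class StatusCodeCount {

        // Map: each log line is independent, so mappers can run in
        // parallel on different splits of the input without coordinating.
        public static class StatusMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text statusCode = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\\s+");
                if (fields.length > 8) {        // assumed log layout
                    statusCode.set(fields[8]);  // e.g. "200", "404"
                    context.write(statusCode, ONE);
                }
            }
        }

        // Reduce: all counts for one status code arrive at one reducer.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values,
                                  Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "status code count");
            job.setJarByClass(StatusCodeCount.class);
            job.setMapperClass(StatusMapper.class);
            job.setCombinerClass(SumReducer.class);  // sum is associative
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Once packaged into a jar (the name below is hypothetical), it is submitted with the standard hadoop jar command and the framework handles distributing the map and reduce tasks across the cluster:

    hadoop jar statuscount.jar StatusCodeCount /logs/access /out/statuscounts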

So yes, your task must be transformed to fit the model for it to work.


Hadoop is really a split/combine engine for processes. You split a task up into similar sets of data [map] and then combine the similar sets into a result [reduce/merge].

It's one way of building a parallel application. The maps and reduces are distributed to different nodes within the cluster. It's a very strict division of tasks and of what data can be passed between the processes [it must be serializable and independent of the data in the other maps/reduces].
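In Hadoop this serialization constraint is concrete: every key and value that moves between maps and reduces must implement the Writable interface (keys additionally WritableComparable). A minimal sketch of a custom value type, with illustrative field names, could look like this:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.Writable;

    // Values shuffled between maps and reduces must serialize themselves;
    // Hadoop calls write() on the sending side and readFields() on the
    // receiving side. Field names here are illustrative.
    public class SessionStats implements Writable {
        private long pageViews;
        private long durationMillis;

        public SessionStats() { }  // required no-arg constructor

        public SessionStats(long pageViews, long durationMillis) {
            this.pageViews = pageViews;
            this.durationMillis = durationMillis;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeLong(pageViews);
            out.writeLong(durationMillis);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            pageViews = in.readLong();
            durationMillis = in.readLong();
        }
    }

The no-arg constructor matters because Hadoop instantiates the type reflectively and then fills it in via readFields().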


Basically, you have to be able to 'split' your task into independent subtasks.
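To underline the point: as long as the records really are independent, Hadoop does the splitting for you. For example, the stock NLineInputFormat hands each map task a fixed number of input lines to process in isolation; a small configuration sketch (the line count is an arbitrary illustration):

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    public class SplitConfig {
        // Give each map task 10,000 input lines. This is only safe because
        // no line depends on any other line; each split is processed in
        // isolation on whatever node the task lands on.
        static void configureSplits(Job job) {
            job.setInputFormatClass(NLineInputFormat.class);
            NLineInputFormat.setNumLinesPerSplit(job, 10000);
        }
    }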
